[Binary artifact: POSIX tar archive (owner: core) containing var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and var/home/core/zuul-output/logs/kubelet.log.gz — a gzip-compressed kubelet log collected as Zuul CI job output. The compressed payload is not recoverable as text.]
~u*YXNSvEoՌjpQ\dZVp 1|] :RP ov Yt"I_3ҹ;6x_'gU]q蹙 }DSG}4o!n=Cz qe?Ey"q" LjATsKc)$V&ŐߞOi:;wWIf>է4BL:s˚fy^y̫5gf08n&<\9LCsn_u~v}:/`4k.+ߥW /#OpgnK(Tܽ_76DE1W+gL(lVZ z:XJJIs -rG9ѧm$jll,t#N;TJBt%V.e #ܤL|ܴ E/;Lxmab7Ņ=K{+=UJ kE5%Q I5Enr5I|%0N q=k8kpHQ 0İ6T-B]ɫztin!^ ̵ -a !3}AC>L!)\b80ܖjhn0[8DOg]fNnnz~=bϻ5ѫ)vGfvKg:;ñ^ Q_"0≢tL|#7eƒU9.[I>IQfIdT؉0lD8GcMI'4qa;,O9GS eՙaI O4w7!ŃvR n'XJ:SR.#aTjǠZ|x<4exnRͧJ_>_ץqz1z24لxM[I!AEӤ?n]ZI"S I~Yȴ-yZn!vk`>5N\*I ;~Rh/{nSD)ńҩRgA̔"[l4mH_Z_N~IpOs]S;%_MD)3Eg4bg/튗BG+0s&6Ղ2LR: =K 9ME("U%xxeÜn SԩY,.Ai~dGFRf<$֜ڋg(iUVDz]z> ~0s)g`0_p)@vG: 1b$fz8 F֢T)IAO+:{(\?90{PIU~~7,{"C0 Nҗ O@*FUmPgϣXTD я2.7pU6gff_u|ґрٸkE Gg啣lrm뽽ͥ-*iIKLNBNajޮ2d𐈮V92Q3A#s޷,}K)nU䬴|(s˯٭T D`5ޫGV +ĨXs)*v̫$䞈mĜxR- H{my++'D:@tb'!?,#F"\Vx@PZ##Q(aU[&-tsUIB[gy`T&HW"&_JM?6,$y_sMuJ9INNl+#%DoiH w+ȪcF@(I$nLJd 1Lk<8S 6&>㘒v%NV:CY(i$]8Db^D*38Q霹j=94on|xx?JQ% THZ+bes{7kN+5 ggZZDbΜFTeJHglDq߁hAŸxo6~Pa//|13C*[?Oև,t ȇѳӂW}xCnEě7=Lbm" P$@#hC¡prSjNçSK3觼/ņYs0$:I/1![Qj$2fVievF_A[@"WUk m=KnX(P %!/F f y^ZuAq{/'u%z0XaRJi5c%^}̊^E`2؟6zBV@/n1a53YVe2㾝  MQ@>3X!lW4Rds0m:ARQ- 1~:Jc!B J^|;Wu[Fq>B'^q65㖦殃9$the@Ȁf"&kv1H  ?-T$ Q?=u`K%h!A/. Fn̞S s'-Pxy {?+gH\ep(?+`Dxi2py R!m"6TDJyN1lF#Cu2Ѕ<$iCIJm?mBa0kп5 HI0*Zqbh J=B@2h_Ǖye{B.9M:ŎYg4$RSg+)L${߅o x~w)hSUb:PuX3 ڻ]B=c۷N c֠r|Վ³Ãۃ fz|=X`bDZfA t\>G5*):=σs .ga9GG"]G";Yloxq1o rq^ɶenmt[L÷VZ3I'$do0y^@NxI\<=E6uC3B&݌U JOVi4[Qrs8)P6g0g0tp'.]ypA. dJNG<+s98{O vH%K*>y=8UVEq^v8J)<(UJ ڛ>$bLQכ{ªD8?=+p;{KD& 1QĬdZV ;Uy|a#W!~X/x%էA44L*f-NڲP]&#@;60.N"98XV-U ZL _o{|ihl,< bVqBڤORb-8fNGH͜n#7x'^O {^RceX3W4b_k $ id8֥T6;#cSlx zm"-c32IWiRG~v|U[ئH-Qxz E/9J` $\D%"!HCeJpa¤NBa\>U#^ x}_,}Fp+X2µCF[w.IࠇD}4 "s eֻ FkgTRƥսJf.{ю4`ߠ5 `I)A` 4:TTiNrCd(tAgH()@LFʳiS3*X,yj yaf^K4L УztғۦAv 5khW:]عFA(-׳^4z)C&ΐi6HҨKH⡦WmfͱJ)-WI@4n7r>|( l|86p8?}r˸F4F9TǕbďPǗwanQ׬1jD1eUL7byO#x(핍[2gɮ2W N.siqDCp?QXMh Ϛ"׆s幣xS4ocT(Q g UR9q#kF|DViߦٍIBKJb3BkE|<%襪t:|\[o>sTM̛a܈y7xF$5kPs4p4ZyMa3JӮ _B6F--ZBhj q)řv^lϫ{/K|2$w\ ^K#QKW ,SlIK ' RHB,$_}I lFu@Ga_- BNEN4xTϯ;RU`@s?Grh6Ht 0iU<#4+s*ĠM-rIry k!PCv܁eB<ҼRp4hT5CΏ1uHK8EZ$ *Ycd! !Ǧos@ZzмIW/OL- pzu5R5 (>tLssS W4Mc=12? J dk65k1\7@D]Wׅ=Mm_H7EmNu(o #o+=u8=Zj #i3Ώ޹@yMn}7& -r M_|# sehla6BC&LW:VG<\fK&o^g:~S'(`ұ] V֏ry5noqMͤ;@ 4@|\!DsNWA aB!%3f(7L׾5PG@ݵe!Li /oO >/(V C}UZb}A.6>ַς'5`}.GB 0D\} 0f=2O`׶C+9@ )(ε]k#KU# {Yjǹ5*WwF |-4vco,M1Lʐ&$)$DiU1A= $&+A JmTH0;;Vqܝ+FtZ]K}XGָܹވ>ul j3za#F"@پ,"Oo1G c^\D9i2"ajxb 7^ {* ܡkvT˲s=mL\''P0_qcSML٣+Gd I;vFksng䦭YXF1s&%7fxDy{wYO_jz@zkmkR#EV#{umc.`Yc`Q\k2:x5kO}Zoj3>>BܽXKbUt#֞ͪ=w5y llƶ(#xD|ult (CYݦKǙ>λXWn{t5&լ1{%Rtb:_JAwLk֬1NAO#KC̃.#'<2Vc!D` S-:%I^rm`}^lcbd(F`ͥH"p羪=ItT7@4ϯ;Z6*@&SNiX(YJj }͝WF)Ϊ^W|y)psoe_%CX9 hVIicc<>ӳv9GqZ&GkΪ(HӜtJAQK#A-IF@= $)y]6;o$-%}~-+ucϐAAb(f9k~ǗB>a1dBJwtKMS=njO_u}ONT۫5ƙ}y}й4( B=\kڬYMM|)sul4K͖uΑf7f6_67Lَ֛P}-m;gw< T]ȹ04SJ ?{8pҜ7F3iȪ/=);ie;Rw?⾕ԋn˫[I{njփ̹ x b,Iqj5k3~6ll&}pƩˣdLvXRj{y://۷[zH?Nqb-l!NS_6P} ]l{vKf.}y|vdI„@rr~G>4Tf*_ݹ]MRHDIwg ,@s8J*+_~Ur"'PSfV3>ͭIQM3~gPp\Ϙ#m`WLmcԂ$#vb?3Of==?Do8i/7Wvo N6_,pL @/!I (F1;Yxdza G!c GIċ,9&%'| ejU՚̀d}GŘ$si:ϥHr U8fYh¸ Zj$I=2fe{l b`2曭R s"j؊Ql1{x;r"$5aS N;:djsjymX%T?X|IEi.KfvjSn8-P_8RYB$%נc5$whPQFT܀V2+T6ay& SwWD*[PNd&*%9+2HTNR+==WZb98o$kP43 )pկ4U#)M [Hh!%n\ h%C1dG=]r Vc0uL!BT/?7 Sx% YiiN"Z%Z QUBnyT;dx| O1$ܴdD*8u<7/`}+m ڭT?il<'!WĦ.#3z%3%n#F7Vj\`mpV*2~m^ԛKW9-~ݦgQT\&Z d9pbV<#\-#>Ŏsoۜ9d95P)v ]>jF>K k4+" -54%p[yKdTktN> *as)-*2#L|ټf=.|#1Tkq~Z;=]u[c.QTa!]vufuLJ2 Hd\H#PETigId?hCjhH:b uW -e9޳f!zD "b4¢X5F'0'WRF_Q`<cj7,G!jQRsW|nxE< 2.!ZQI!ӢlB/׏RծRu __7rpL}hR|xH L=lԐ'~#oJ9pWK!1K#hoڊӭRɵ՜ <"pp  #b(4CD^憐F#Bzlb7]i7r;ݩ. 
.1g,?"iniqq,p߳boSLn+M/_aJUoíCvf A2l,W8Glbpߵ Ҿx xNa/ST XY|}\x= Kz[GTT||{Z~0+TUsfi9S1Nv0f{{pLs:V228upi9>' QڲgTp5} xՠ/k_!9nY }i/##o_$ԟ9p( nH#RP 1mJIsvZ5=QF*QYJUSR!{ZnvHj'{DNIy hOCw"J%4'@q>AZ2u|1Q=J#{MT"AFMuL 74h6ȈT戀ts% ωnqJSzI"1 }R#D`o *1%Q%hC$p-,ɓ70+պґ럋AD vi߷hzK.8qo*j?_Tˇ}Qr{0Qҋj>èHeG|>O7_~<. DolKOɯe8mKk[3/X|_?~͞f0KXcڱqٱ:4G5m}_FŞm j,r8zϟ珋_FoG؎Ѿgc!qGedT^]>qEml˪u(ʯ=H}1KJG @UuDS!j ۇ2/)wCQ4Kc1s)7dՋW򔃧 WbiFfe.SpKV٦ߥ3Qtq2᪝ӃeԐ(9+]4eA,֗s:WǾ+;i}u!΁hחQ!5VTDOm~C}-~L_?i=~x?[*v7_ vIWP͙Hp]1 `*3“BXx/ lȽw9y[U khp!:!k&f:py]6Y9/z["eWJ}ܯ2.FӮq CcG9V~;ou,ubfWclt1aZUXNJ+Rg*=hՕc9^Zt>0k3ͭIQʠ~typ)ԡ9R=`T?JVSں58'r(O>|.e6l R^?"/_`^M=Յ# 8-~UĈuFn"x-('H>-+mMJ.L(xu^y<<'Flw{S'k\jiBƏ^PAE*5Pg4>Gjqi@J'wb;Tio.\tRg(q6ooQ+PhvHK=T}]v,imqݡrAIt.hǕ=]}2 RAw.}ðlkǀn'TKP}!A5C8T)IHEXt噛/IIzFdbfix Tm.e7xrI|UeںvѮ#ѻ-m%lk0PTPYd?)hl՚2)Eڻ2ʖl96!MYq6Q-[1.Z:#LVSj[ 3ҜQl쳟?Bx~~p 6(f>xC$!F߷4*p[LSm뚍RRw%vκhDٷ sC(NTfsI*r̛t(48ؾY'ޓ(M˃HFIe7qr':a9젃#E٪*+<H *\jz#Ҫb]7':l՗'wOOܬyd*.sc9%!շ9dzKxu藛7+M._JdtG{hX[9)9*S[[B$~[#dDG 'rPKٖCxM,Ρ^ 2zIM58<]e Im`ϓoP V 2.JZ&eϗQˎpRN`嵭:]*mCuo˙DJ[S0Gm*ޫB\XqQ+8-sj8'4L~O {&Zbâ7[ gZ BU@Z񱽡AoAc1kF ;5lUo,|CASe=tRaRYڊkWNHwˉSC PUN$PXoѤ㢇_wU 6r 2..Ag9ʼn(v}cԲ`&〰583ϊyǠE/κYy&@P!/x)(lr>8a1o{2JK{Cgo'7v%|(אC}e|&>rU.g'"2I Z"[Y.J L:W+¬^k$B7ϏY@q{|lJIR0#,[]֤Aew2ӤyXȭUU8\*-x Ts@r?{OF_i&FqI|&B%qD2I[,&nR9[j^׻]!LJbq%A{dv"tD;`,4D!x_ 8[.~LP"$Ze?z3lD ap^L~T žMWR`Pb/noa0E 媳qů[MW4C J wqs{vyY2=zaFe$`ZoL1EM'S%P&ryJw`l1h0c) S(}cގ֦Zs0봶g`?~ej{l,4v,d|Q@(qǩfU+o.]ݍ0V"V:I:9wlx(i@;&ǶFRQHY{.E iI10G.X%z܇ПWfD&F/V 9&k=Ju4l9U^.A4֌,`n,0OLG ZGZ~5|t+ODAHn{tGH}2[%Js(@ V8lJd2z\4 E:-zpPU (H:"(H(N| v ha &rLKeL3{a;]gEJI0I~65I1( yIvجJ$FttWρwdRJ5xȁzc(0mwFIXT6me'@~Vur(c?\GWY}VJ1{jl?ta-Ov@/I‰7ӼVE+ "FmLn0{D~s(0Ё>+^)XQ6 ~Eeg d'ݧbbj}Za,`t% yKgh8P,JϾeU߅l?uG\5i:upQ(__o 9BxsDXo>mOYemvK 1gb͚5T5/^~ (A:1C.uuD[+FkD7F+ɹ983E+Dt%F\*/9MTx[bKH2Jf@S)F(0%v 2DPtUgGy/i7L]7EAKä|?"Ld s*+O]/\JdlaK_}ǩj65\d1Ti%2ͤ|v;<0)?zbf;V^6M4{*Lc)gQԠjlR8uy \li'|ݧ)S8&d9&9Op~vݔL@V#+Tb vc46iA(K4\=ӵ9 $$l+j{8MZ.mEr-hW>cʓsp;ML}JI/k7zX!Tj8B~:<@ ֢@lK~){gL{S/{C0VFlI]ݛپѸ`4Lnqz/E ;fU؊}?yn2yN%Р'p+y+9ggrp:~,]_o3bC%aRrR^q~t<$su;?< '&M.|M fi sj x[?S$j M+y r]msnb1& In``AJo+C( PVrLREJ˰ȃc`K9f$ۀh 25ʰWRD\*֣Mkfj; e(/aʰ:&fa#T% vER4%~Ć^nxȶ1=۳$/lI֨89H`Y 6Bi}TA"|,yQV"NqDdyJ/jx*asoAhWC9 n\ 5i}p޴ERmfoR ,W@)\AK`"qfp` proYʣ8v6x(oB1AoHG3 ETH:(5DFYhʅ^mA18x٥ObӞUͪ2݅ҹDJoB)*~A h~9x4 `ǚq^m#*uFmkY#rzʐtmǢ \TATL7ZR1BJLsS1z%aw!ѩ Y*d\hdGsSj.۝Q.-mxdρ"3J&3lG&AQ:TX5@ȶ `JER"sJ3[9{1yuY`s Ab1D'խg[5ZڗS4lC ݢޭR[j~7v+ Mt(  Kt)CTsT,1;- #؀& #ThN7'M| 2XdgT\!u{޿~$=y#a|(Ga/]҅;G@L@Ar8$z,2gRAu@ l~ӹ >]I,Wz`{cK$hDb#T`JଔݜvW&6^ zP8ȣ6U3R)Ϝc,qNjE0^ 邦Ey E9kфz]$%U; YAH_{?⼕y^*9aAxwQE'f[\DgQc)JW`9_kӄ|iϗbP )JhRGG%9? 
!p2`> μK:`4JQJ/=ʗΠuPʜJEPB Jݖhy?9wM[SL;|xLU"5,O=Uɦ RJOɧTxzg9Uj1Y¥@;x|Ji+=w5׿:sڊr TrT{;GRxxk@FEQVv3@Jvɚg۝s?Nr,~^j:=ۅ jSS*tU{7v@%깱;׆XMuwa53{:oyMbpxkTn~3{tvrwldIYPπS=^ )}F0${ "1ia&U7Ki%a\+ފ od13|k}]ρ/56O[cP.k=dP)oV^  OI6&vMI{vDXwf?6,.AQurJGUKN `糞X nBPvA$=ZgzVfk~3sudIOBaE5i]N4 պ I'U, i:>Ti;KC,NJ&`U&O)T!Q]r̎OERTmUԤ 9b5j'5rlr MjOUT6&$:FRe`}Y靌tX6dV<%<oX\l읗,{ F_}^5woczIZ`werTI9;}n-$[)QZ lϣhExbY˃4i٢ZvZJD 3>K0$U1:iMPl~ פ,D2K$Uv,VT% a 'MBڥap:Ĺ]:`@#;R+K+; ƒrxǕVD R(lEM-#](%P/Vf4!``|F 8Eox6}66# yP2 c,5k@>gcSThĪ֢PW'SsK<>Xi+qa Z"uopxKPEÀ;рala32(O |>;zWU&/ްo ''fׯEa lb`HLxXFb1k)3&L+ ƈGs!VZo"0G#)NE`Mr)$it}YK岖T%RPseJJrBxG͇dVvowXkM5SlAÝTRsCțV/'?JkY'B~õ1o\X&EU- &Zn6p[Wʁ&UR)2*$5J |b6<:۰GfE:CX`[e[Gio]z&n,ڼ32ny/hcRf$dDRYܬ)FkR!D6#F`gW+tRNd3;8@P)!I9qcc% }5FBBEI:0&,Ni8 JG)(Ľ:\ xY㱑D`mјR+,r1p !9)1';8f&!#$r,9+H\a8:!5 bh5f,8 QAYnqWIM$ZG@Y0J2AUE欌m B5(];22f"ߵW:Ġ"#٩JFP,Qafnà=(lJ)"FόH|sV$F|DHnx*vMzJ!~>'edޭKd^zOV&zo狷͓6/kH^Twx؀]_^Tӥȥ'+m]}i٥>vwLPK&bv*`R2+v¢dޡz4Ǧ;㕈zz1`z#E(ŸlG=|Ȃm3 IG͚H7 nKRY" $rnQ,.!QsynZSM}β7ɋ mg{D&">`M^èфZEAc@"(R3E,QJH"%>l<< )i+?͕FM0'MUja+} @bl*NjCai`/Xgnwka)O:VC%vTcbx&0#Vj.v8>k4\ 2?})g1 ۓp,fQ-يjXq93=LIa)1Cxݡ}f`GN jyA,o坮SQx_"NW:M7&{VZ,['Y}OF_mc^&vF䨈ыaT# m^ `Ek{Z+V) Î5CeA]"Zp>.i嘖śJs:nkqd2^a8:40f0VuҚ*OUd&q>P3" rli}4ȑ31b@y祟yi!bh!X 8?~=q53Laq˰:j$6W{diZs{x5v3mpt!#]hXN"] PM SOR&.r828VXRHE٧ؔ ˑ<б)f*&wv)VŒws{}躀-/q%EI$BEngΚj 'δY8MzEE}#}x4KEEx<]e eyrJ@<={mA1 )lA lrᨱpqm,/c{or9 [o.`B(Vk Z H%Yc/?? +&7]]|M[fjry,%BJux\E&;3vz4k1~>dvH3-,ylu '+vݍ`Kzc捩ojVgOY<&,BW*zE9ZGV| )d -ziN;|RnPdI&K0ƋBlъI\g"!y`'Ώnwo3?io4L,9w%DeD5 kY!딬95nn[Vws#g #>oq)3(F:6O{r,VLKUU 9J U*D#ژqW/qH8|ɪ`=0 vf@Dqqc~ŎrԖ,ݶ L&"z?ΐ eXZy[\Q3VB܁pckӶ< ^̃ӆpSHjKwY>1ϡIC~B7"~9Ċeh$}Vr(#2)>Q73IOP|A::iJ`Ga 9zŇKc;K٧VoUQز|ש4ƍh9rZ2բY,\rPL='R)K3 2P1¼q3"Erتggߛ zr->n(#(bbZ|ai״7!5Ҫ_c('y mOْ1Y]n $:y^1J0hTjDc+}fVB`F d'iW0erʩQudM+oB eȧj Q̯XJW0/?$So 6kcK+Ϳ4{6 +CjTN,w9f3ueHJqFo0^FӷA30(9 ?z!#Mmal5Drk!,N.xd{R\g* ,܆`&$WěB+8'oZr)VԲ&0W#  +;v[tXzX!g&oc uPQs4)|+{bNQ!/{ O'Rb_gWw)D ֡ SSt? .5XW3'rɐ49>d"X[[4\H_)#L]E.%I*h*'p=l,1$ LsۮOY6a+DߜgF߷‰-yȶǎ]'MD):ozZAs4q ZNސYD*I,STEq;j9><>+Ol3LԵ.0#bSùɼ;=quY,9MٶKSzGŐ|%P$%bK틊8l)q6c"3{T|sX5pm=Td4)x,xi@'=sw !—~!0/<"%Cw_9=Ӻ3$ D/4 >zЕ)*U7xV`u!@ *R!D:*@ ` Eb0a90`3/9{6@AJ Ns`hlo?w{~(Yvm1O_|?0-B|cD³X0n,<롤hUE\Ӫ7>#NiR[UIY7u9)STi2;T:YRzX7R`xDd|@ e3éBLX2P:K{f ;f#ײ̾$+K:g̀Mxqm0^P;`ƨ(5ˋQK=KnT [2d&`\h9ufM3OG7uRM>{@ ([j#YԄ U'DZ &˵B1'9vAs2]bX.;g|Q]͚HRwc5h/Y B=h3UO5Q:4EgL~yfVa]z;XvaY Y xt~|7{AR3G+jcტB&:?<,YAO~./K~722e)m@B牗+l`זAWs`F~d ]^#>$ɨŋ3V.}~cL%{v8K cjS/G~^/ +8N6os: &zɳBzOԦ|@5 sPN->hDTOpBQd z?`tl2/T@s #JX|<1& Oјx"b%Vi(B[+ &>ȷ*&$ݼD]UFmLLY\]VA.^%+=Fja< ѣ$zҹv͑_A:H [99H93kG2<朙cRu|*!eəY93i-$neҘcJbdR$>J|`mr4XǴލf]!`걣sAʩs YP5eDqj̤5/e unR2">#tb8yϪjmʩ4er)}-/ѕ#;F̖sySbk?_0Sw>(׉SbX>%Vo&V\} \,t( 4f奎cPW KEAi|C uDi7/߈qgxf4 +dCfL2TW].a.iz  gZJd204Yb2Vo:.S?*;M^d Ӥ+3p{V* IUU+ygن|6 ~M6 'Ѯ "01e2ېXq4nakG[g+aG&anzœRKh0͡l%7(feA(^k !DWB#XNF6 )H9< zyQ+H6¯~٪5ASтE_>szv.7X˯pJiXr^N=yj,sgǧ{+0*]9S?HM4hμ\\xqw>^յ]P0msHH!a?yȶouN:"gnD_9J2%KRmRce.&g4B>XKbZB h69QrƻY R}u, fs4m V:V1dgF^-(0uNBJ4Yjt7s|edn + )Y!]9r$5u6G8 9>vfD)xV힧jdZ&Nv´T$f.vXRQ!"iZݑƛw^L!Z!jIkoR@74xɖo2[TJH0͞]1Q ԌgA+`S= l(7pfCy LF'k2YDl+ϳI3Lѭz5k>C+J3P )ƨ1͏(8q>R *ʘ;b žͪ"OW>m4ޛ iTS7jQK)bI%"D/\jD!bG#iu25vM, Ri|HyKh ]ѫ/.˪4ڀ|:2)Q[0A۠jBQUL5 $Dm1$x :hvʽZZFܻe]˖\v}Y9$ Amt@.IDu@6M|l%ՕcйppW]LiDwݵzݗΦroݕ]͹<򹻻Hk)5@k:`]#`9cL^K=g[91Sޕȑșѻa8Ff!)e)w|Mh&5NiLF5٣w,Q5UG_=ڟ/F滢U֍yVXqzOhTj{BdծM`|O Q88'gE'vHЛ;ڐ&YO? ´ti_;/ĥcXr_ӑ LibVd<̘yU;[LЦBldߙFW 2ٷ-XpΆ*T;& v+kȩd#Vk5C-&[iVezW tEmrBZY$-/"ٰno0ݿ`"A ޱc[ŭH`6 SOX|׷0?H799vȥ2U+/R[{\ΉTK6ZAc'1 IwƂ¡Ž J%"8%clIߜ<۫Cա?noXNW/?:[_ GT#.JTLDzZV<;>fFث6?(xLwi9^Z=HKۨX>3AaI˒&:]nzCVtd0t=$mIsC&׿I(cxײ)B]ezixM. 
;ae@*fEvYlb}iפeՅǍJffkm#Iet3|=a= #lZ-ybFRDJ\TRn٪F.2#3_DdDH~z_VwóY4ˮuT:.=uatTTO\x*z?.t:jnq]攅 9>;~7{3/j4C`tw?{5گq/|o{wf_MUwWz*x|y{O;20]ە7AҖ,%|a`,^m YT֪"2ʪR\cMZ#GCnYF^f܏bD$i᧪5fZ!^k>h!gSm&ܲ6X0E~N ="Z.K(][~Rڒ\&zIƭ4ގWu{쟱C&KyCFBoA\,$..G9ۦM5lct%lxub5X0kzًN}+t=vJ$jc9EQ؃7Mj jߚt"\:a64(6%!ذd@ĄYOѸ) rrى6Jy'\Umrdt Ӿ(P@69FHVU8 #'!=ǂ- K-R̫a*~Put[Ρ3;r/Ix_`ƥ^yG%n83X8#0zGθt_TrjR?/F8,5z7y7*#V<.ءIgZJE+ߜֳJnj/fA4M@rtqxZ'%&?okC5#钊7i ;G \aDe_su gg?]Os@Qtv4=Af &Xjs<ɫgx%Gy?50Z,j> P^ Oy tSE燾?]i!!'Ph]&k `v[QvLw%#q!u'Kl)VSqFJR˷~ kZ|-O}m\Y=& xZ= <+9Krea~EM2~G89 uY.i=ǂ FK\p{eGg7,\oG Ϩ~|#ٳQ"9u =yO ;/AB)gU2|=0ND0W`]r"K0 _˽һ.;D^~je9"֞aeKP-VpVk;G[g\My-%>: \ag@xm8H^\0唔_? Xfu>Ɔ$X|E0@Ĭלǹ4 /He潈UW'G*!h[{f4dv IlɆ-5}[mg4y 9|XY*X|XH۔NҀ[30j&3,8>09kzSZ4T0A>8~[GrnzvUd>%yvQɀF#R4k0ͣ]@&i~_aɏAZq.zw㛡l$_P8'OsȑGpbOG8oC^*E'e5waϓr.*lϽ[֧ (wշ= ⃷'#4Ӥ/G:ap?_pFǷMu.8Y|zsx>hQx !b=!ߐ{aeSgK >x$nh} y1VI{>0[u|嵺Su\ӱ%i۶/lal!0jן9Dxr9ރwWEro,a^{KD{Ej_@\P~㗢9F%Q^ץf|Q:>vsmiQX[QqcNg۾~o_]\ylzw,/y6ܥNJf&yVnMӮRr!vDN9MmԊ1@ZR"!P9:KF-zAӉH:3sH9tB Z{shmok{czHg~J3'3.۶<-9?әgw>c,^\N+8ȣ] awnkڡ;>9o.XP(on4[䉹3{j(OÏ0{2cm2!(GoKK nuBj׳̥u%n_."% oJe zxSDwo.6-[g.tnZ+QyCY.J+٦\}6Ⱖbc&֐M91{mD ovہic;ɁڛXm3bQ%C6B[pC&bxgѵڂ !ea&"WM˺ֻ1ښ۪-O1q NKh6@@Dզ/Q;Lʱ0 } HT[wĽ7ؒ% [+"IckSI .CD[+ryN=7I`S3%+ƚI# ŀa%:5 | bѾVE2 R{bnRcC26Ĕ]դjBlL3µ" hY:@Imm6P.5h2g1a4mm?=f_DF+%&Urgs1lc= #f *ykưZ& &&]EB]m-g(4"b8q-I$Y/g.!o t^zs{p}o xhow {oًa !i2Y#dw,F:mJ,`2Hig&[ 2pIʋ)&1"h'F`wiY Y"GyCbLoS5 \mbGv~)p}66Qy|VfT-_*5&Jtjh\t SЅhYKr'n'؊_Y EEpvg}읯mpK59#"62 k+b0DGDՑ$- qyTEDV{$cʚ&,mM:u掔cvzC9ul\L)4%!21ҮgqKz3TcpEdJ>aI6GHS(jxr"0ژ*吚L^V䃓5Ȧ }I+R6^j) `L+R"iA{3" jN:u„2BXOYS|ĵӸPQ[V}ejqz#n6Λr-UȊCvn7 4kI [ ӝ;luOŌ5z-&.(dc>73ILӣƂ )3j9wMv}N<$RZFHM[[\MOܰiBر^Գފ@2S EC Ql:dSX#zbHDHhPp*ie uCG\tQZ ^C;tL+8&uky}xZXtUU-ʛPSղ=̭I$M&DV'Ķ@1P~v!EooHڜe/΃?P\s Pʡc`bߚҭ\<1kOAQ# +Z1A]`eIMf9HЍK1YAX%G!hB&cIYPbǦPTR!ْ+YANq zW9[Gyo$@\\uOþlSkDr*KZlI-yD >|Tvsjuu fVcWo@RO$H FW} cɨS6bdnMLE|dDԚRn bP|QJfŦOɐfoeZrb>'N%ls-m|@n {JUuj0!ڢ`TC=|4ȹ6aM9"n~r>&<5S/Fx~#˞5a!N&˓͟ZjylFC~- ::uBh$7De*4 [Ȓz]sȞj^/LX`<MѦʲԤ>A]ŵ |y\i~Emiڪ5XYloUrƃ61 P0S8WtǑx`9$g)D; Cwu54s!ˉ>!Hʟ߿so&JOc k2zOFn\hia1R(M!}i;^(%RU+zqM~Lu|hv{M ]& 8rx4jBۮأ64u/@^R`,id\*n9$Sx" N]^QfjݳN*tqĐ rF%BZ:'sZ #7›_͋=? k2I]Nݻ/m1g'ȸ1o-`hF˶w$gx2:;oc=2z&7nGG[Fϖs׾E`c?[@_!|lg lq-s#rvڙ: 6sf hgǸA 7F[L]#y*Q65nnv>1VSJaxHvtu]aU1o]Oqk}'h\zC0D xr:Q+=ǾF8=ةʣټnhG>E [ڍA=3&]3#mnv-vC;SȷټQshGuApW9 =(Å&lÀ6m& q%睨ߐ BvB!-7>FRzIȎlln&h$-h!/Udgނv{fM*H6h7c߀㖮ԡv:ľhg;,[hFonv>$ ~/A]23sjRRI4'[f»ƫ=~p=$>!r5NM"]^NIvo= O4 j:a\KF+i0ރG+(utT/b&76W5ޔ;r P@LQ1`VzН^3$q>h9m?}fh^b}t3M1kQ&Fǟ~uN"bҝ22MESwb)86 ۷P]C iT7!~tv\2k(KnyV%[gzݿN~[ܛjPe ZS(I9+0}wYU<tQ!K=4ƚ/W3#L hz/IqpG3ie:}Vkx6h8rµ:l ΍0l-\{$xH绔`W]v|B`\7cp};Q1nfcC?ڇ3;tJ N8kįlhS*Gɏ*~K5Ŋ~. 
4$-j(â0@!JKK!2P\@Qb^2G鍵kb3Ft)QY;>7ؽl.ဳgLAڦ8v~g`bm-{9# {֛T0wa3uniO̓y/{Dœo­ F>&2.D"S-rG/*$eұ$uƝ2.:j+9uFzVC݈7BQ7cM G8 <8Eu-əh̊%~(S@QIY08&^9!Ϸ[ n*y(/*l<&Yr[ j Do3Ňwb7qreyWo%MԙOsSu4h$:8cUr*%ȵC ]Rp)'aV+MO\p~.IO ?ჼM2qMGJ_Y3矘>-"عfu9$Sຮ?Qiuҧ_C;pm/[\=ihS34mR)4Z,ڛ[> ®g͗ѾsOYdV 5ETa$ö^3#ָ:Mvvz=n28œrߌy6Qn!hb>`/_%9̑k݊av=Zt93C .8eACp  pѭ8'Qui;'>IDc܎I6\жA4NO[d޾E޹.;wn,xBg78qc_=RGt:C2y}0C<axY1>=@6<1&5 jW0o8HW,ȣXs)o2_(kXO?_ŊyTO''+uYj%X  `\rfnC5GDjf%(cң멮IQ˞~+?a>}lSo7?lojqյG=C)kMB]䩋QK yQ,NH!&g1нOlJJSƎR5mz8G/)I@ D!᜗w;x*#V%`?Sj'L8 s4a0䩪`1]2U4B:6C)e6c -!刬9hH(>_R$s2_O3M[/e)Dr>)1\;1;$vۣ)o"6SnϦ5][ 7[vL5vA3?dgSb5=M&g !Nj>"\|5BS\J20p3Sjf BY͙n'p%Tf#dvZw㋀`*^PBr#Zo&{}NJu-t~>KffFu PaꉉUheP<#1b.*-+9V̆ZHaܩV9S0bo߾.郡"#UFZƍ?2c,ː D`Rw$eaǶ?ojs5[(EM_j_c!n,W2}kFEQxA0@mAЖ-ZN<[l>-,Cɯ,VT-40d3a`4fйly%Q%ۘ(1X UU] {GYL*~)Tz{tحeQ{!JTpO!P2OWWΩɻ5Fۘi]bʘC[F!(CѸ.na{mIٷ_u$Ԛ > YgC(SPWЃ!I-}F9 ]={|/hrj6{6~i#/fjiT8x7 *`|vj=r pE4w}=ͷƜa)ՑXYU= =խ>A.<{쇭v@Wzanm;]fYBl{3{hŎ=ܫSa~WVL={͛n<Տޟ"<*|vWZ5f遇?q\/?a|s;}\W:Y.O}|XǎW/|Ke 0J*TS*C#s)pQ<̿xaBcnB-2\]}sC]_oXlucw3*=PisഔqC"iāGK{24pj#_˹C;wi\^߁YE*mqm; {ʬݯ}=^}vm1_O>X35d6aƨ]]蠙C=tuX&{hC#X0`>DxV~ʎF{le2#)d~bĆp 0g*'v&e}gLL+ NiWtuī>|(3nkNkOgvig%+$:g}nuy# fn6%JkRuPF݅/?.q. /anP6 Y]/5P} :1U^trN;gQ:2|Zvѐ.D]dZEA/{n&")|⁶+HYb_)v[ٯ!k3mX~!?ؿRn*CZ~W_n7 o˲opܖޯߴO׏_c>y>3ܞ)noU&CV4mU5B Uᥬh_X2 gζlygԉ)" l2oނ}a I;9A=sX76.ZYG-#h .NHmo6+S" =9E5!Lljy̾q 9qFu*/=!(_Z( Q}Q4Ԅ:fɖ!jܨ4`"T/z#b œlKWm|5!xR m݊J[z]d $XyT@` VY1)"X/gcaoU}q7m}ΧD)I\sۭƨ%dNlIgwN8DO8m4mu:z;%Ib* zI9bR :oos1* Q]W L@H2z꯫)|cOF}@,(h[^V c.>Acqe"ՙHmvkOTipX4ŊYQIC`&C1&!1M&Oc^ʭܠ*T䫸=NRʤԦoG`?`ru` $.ZBTیC(̃.X\mTVP)N`X*0LYX$d#Ⓒr%g}߰~q֬=v*yʴ atЍ= ThV'y]t!kV"؍ԙ::J_Q٭/97Z3UEa)QAm7}IFGMF*[&iekt#/G'bUi{0rlk"~JMOY{mzr9&MG^9haj"`+?3;{Ź=&WK|P51wA%Opγ)ׁ3CD.Z唖<:5Ҫ_ 1ŹM)UnRFLn5wnu0럶C[v5r?b#W"с@n[x0;uz [Ruo6syOMwi_1J?d٧ϒ}h!(CR76Q}q5Ka鱋1G=l.F{T3xCIVT:5| BčT xk)8P#Xz[jcΑb ߌk6#ǏOח珖$mp(i*^1n'/|ϕ;|Yk!y@KŢq1!XCf<@L}c7H9>>sbpxeaNj Po:~TnOo|HpHCJo sCNV,/v Y{Vxp2xa5Y7;Ѷl{ ͯx+{,,!!%@VGSö&.v A׻6 <:;8Ѫo@82`KG>l(`hRY{W3rT+'q=y$SWR[m}N}oW:"RG0f5]<0-Kg (oЅ.qę7"3=w5(s{' [z΍S~j`519nd]~[qĭ[Ξu۱u/$Jae*~!3U a)ve~ɶez $SF9H .11%*EU6YKt{ٖQsyݑx>xwERL%ǡ%tu,l٨!(>fɚP} &ob&@R nFDA_`"R,8@J\!KOXL*d1iuiF-(`6*i NEu!eT*瀥1s@5TSuTs8gZrMS:E%GP!+e2DڐD:`Pvt~i^J Hג1J2"Oj}7h ߴ~Wס0wKI,%&cܐHT5wFK{󛱜&*뚩:[8B U6U݂1bVYR.GF4{yatyjbBIXK)茺]5%p#R&A-Z=aga R-$OeAc>l@Sc(6s0oê>~Ls立Og7?.`ƲA4PphcύM/vz4=6.hrWb恝ma;9ߥr:,x*z`n!ςOJF'Gi#g(B$@{̭29`G'6>k2NJ60k1=g̡u |=nxSSvmAS;F|$a0O{lTNay*,vjשjs2vjDzשݷ0GjD9`PԮS)| &S90k/cD?]ԋ5߯{542]7_noeL;}jî0~F3q<ө6|NkmXnΦ_A`y|#vq`EbLQ2Iɱ߷zHICq(9"ECg몮r<_HD̈.Y@/)yl,.Ӝg˃ܐwqgş7uK_.u!IsηLR 2$)׾I2%lI]QڈL,"֧Aŷ.Rv8s[`*H0ůpxه<cOzwYW涜]Wkui "K81`B+.Mu6, .͟oqsHp[q/#22; @ q"kp=pqLW1 W@ IT߉e7)E7n|( #G\yl l?X;x> Xz9a"6I߮֘MI>]bKL6al0D=7HLհWm< cSc0PL]v%Df$q}9'j9` VdeVj+uUY%zA߾yMFK>Q)U0#E) D rԧɌVFFsF1*%! 9mM:B;Ęr턿z㗠NppY43BS&\k~/8 13Z;B4k;v1nWe)),W5^ EDWDj.aJʧ؋zV˗'t !7BpjP?Lo [xgiDZ(F2M{; #<95X T)/Jc8e-áp{ށS,%J j>ܕ2ͺγ{@LRqe$SirnU~C64`)2V)5HMr+|L%닦ǦcMR^" D,1>@>D{#]>=xttl=Gǝ2$t}5=uo*1z:I޿;b/jtVrwceDťzoGa'|<XhG-C7܎%qh}h* S>Mߦ:v _Nm>a˱'S KIp>т FBa1rxIwr FttW]=k0Y)*@1SܘylZ!R1@v3 U9ŒRų{Q4Ta%ͪ9\IgLd^Q%Ej_E{t@QsVٴIӮ^$SyPLXvLif9x-G4-('g;[v%Wpu'姉*[vٲo$odn-;Q rHJ͚ N 22vٴ["i# "vj1aie6ʹcJV/!Uңb<|3^Fax ]aU ^??tSvQkC8ܳ]u~["%aףtF? 
т2z^-u3}N%'!TEPJKL#7$<@iYrX\7r:{&`WG gRʱ/li I=,V}BHhLv3]Ȝm]Us`` 7y51FkgR$#ZpSj95!pCL JтiY(`\h tcʴ a W`>svzA>#Ju}l`l?TCƏI>2p_?eI Z_2}zuvo ݛ_ootB(/_1"+r$22_]ۋ+jFr>ƕ.p4nGo}']Oҟov4iIeNzv PB VFu;_Rc=j`nUh aj|z.^^JF'^P |Ws*B[!^H$P̉+1@%!/=:1y2r.w/^^^C=/>tkI/xۆ"+, Ϋ ?kHjUrTaݧo̯0Hɥz0?ߩ a1ЂgbhQW5x~yP^qaBnv0o@nMMK=(8)&j%"NsNi"䬵8e٦>pZ0=?nL30׃Iㄯ SE{ΣАvۼdk>Ǡ4D[[%4IY%'4jr1qPq9g/&'sLN: Ptm6"~KbpFD 6JZkn Lo-(Ç}@1aD$`pj$}zDQzTsn0|snw&R#q*3;Q"iH DeqVXv,V፱x.eO\47z3vD|T6Wi'72aSdr9$*KxM]0KC9@iv6PYDTkU!#s a3dW*Pf B (&f@=- 4ESz( kj2y jhrub_Yo0n;_ W0(5FnHBf:l ɕOo/XyǕ'F7\]UI4r䑧6e4?0n75dB#xeԤ bS5$;AĤ&>[| 5'ϰQf(ޚϚy1pBuO?tfQϝou IvC ="R*!c4OaNݓN|P#臒I%mJE>VapE'OTRQZԬ6Lt/hkPH"x%F+VwuqZQVbb~ 7ӿ?Uj- ߗK͜˳0L5|` ӄMڥIǎF;D/_(1!kPzw(t5JMtq#Z u+zY fpQ*?^^Gn8ni)z/Ւ|i4jEa|p$u:SFro' 6;>qMkR!Lmׇc9&V9=%Y 4w;fmM@j]t@ye$@I"#sw}C*Kԛ$yKFs8X24G5'F՛0 z; \S^0X%"RGR\j-3]$ޘ=ݿ[0hL"Q}bץ,F`V qRM*]^G(˿InbZK@{,˥P@,K`%8AoB0˙E'㢅(L&n!K!X .0̖~ 3D5᩟"Gsu8ΕJ&gX=Ύ/:7ĘM؏q',2ЪY&0He!IڰQqZuNWe'>q4HhwmICnl|{n7Ag{хQc,߯z(YCrD4DdTUSOTWcP)DUR FL$RFK#uYFck-RUJ[*ܓlUFJل,t&[6'S- i)*eE QU3H$5P@h:+%:V.@z0O\\ItPm~CnP^ )KP/0*N+RZw{K%K g܃r璷=9N.YLjҊ$ϴ#N}%=2|ߤ֠wNҿ."ϖWk2{}RfpGUƷU/b,fe!&)]â$.ս(mwƻ]y.@,ة1):UXթ"v@Ka(O%x%FҼn,B|y1߷S_bO]F@%A5-ts86\5,̻/tz4 /fpNwqwGYuoO-siu?_;b:h~_kߤOgGObeM8VC<혲$FAk+@:McSK|(?m{.ַ3e }u4O D 9'i$<5ЀHV4Φr6^CE+9';TV)ҊrNU~Zqߜ0C%ܶ-r;"-r;ߢwi^BuXy *K2m^cSW.l-!K|>VX O6nzy/m0X_!peZC(k 4-ŽI^K[(4m2A@˖bKbuiv"!+s*.'@r Nnd,I8tPEq>6>G:5uIe`Rl}Ɍ GOյp%z[VTvA%<#&rl9'ِ9Yt{"ijYHF9t=u&ȇI=h2-*9 L >mDU ˪Qz.L79όRG-˼m2(W-Ըˋ8~$?YvEz?/ g"}Ǽ҆38oF7r)]XZl*c])m|t^+K8†9֕1xԵ%F9N8)ON0B"ZW4?A,i黍' lҊMCpZq]yvf|ȼx=vs?7A|zzu=;g,^M0 HCJ5ozm.VCU^N{ֈ3WGϿ)f "Zf빲ƏfD ]Guϒ ]GZRrOYE,= y!i _9#ZkGDf,3Cgv;cN(󐔀=;0;RjsxyM"bɎZ7(2·|]CɌx'ZC)4C>-L힌+ڵaNOiE jH}8vL:͢)CvHCD UxTYvL>J)F4;%ppz6?xV)6AyC_S\tiJR &B\s10.xs\&>/Vʘ)ɜ%$*NBPc4e]G^6" UK_+k[Ú&>Qb&>*y*QxiQ]ʣAxIa|BA{P̥7tԢr*5Kq`UZCJΕSVձDkU!AxDAy'LVui+3Dۘp dFۚRsYM`sbՁ| Fo̷DA3ei$2XW27W՘J>%8QyZ s<Md* QY+$NHm轶 8'C. JJɆd8@%_杔/ pR8L VH*&M r~yh'y! CdFGuPIb5]P"C@.bjԎZ3[9GZU@O'1[7#݁S;2#9jASMG S;vCJ; jG@-œ$2[g Scv(@yn60OF+сy iڭ>G/ݞP;'t|c 4iEӞ]F9?c=r] 3nx#8s)ciX#e)335') cv״Қ "h1Y*=獎j~eZ BzDW-ic,,zZŌ$4-4Sԣf|j|6"rS,<^]}QYv% YKHW~$'KC@ &+_ ]y[Y &y3`LJV,i'Q =+ѫqƉd|zl4]EIJ8kɁffz{T*_PN:ʐP!}ަ$F>^M^[ MyA wl@/y6{ p<*guw;[x«zةz5ά>)%Nzu%BI?\Ӧ*57k~-|z1gg߾я͉t=f4/azMb3+ocM7Jռ`q̸Sy#}=v.O͍}M֝x|Dlhk|^žo}ӗ7iLiE9tMpwY\ sۂ@5lrV0率k͇lve hXVxs=CT{~Ң; [-?&eL+^O&p=ilH*|}^Rɠ K{V 6* ״R6ϸ{h$ AP ;p)ɀvd@.$8\)1K("G J~ k2*4Pg&sP!q4%!GB7AT-ATCڛ*k"#DlQe+Z*3aϤz ) Q#)M(y3 &g8 _1 g{"2x pqu9u^T`FN %\Z,-׋s\Pz9^--\.SJ!R!ƃe ?=áMRX޾-f] B1ͅ/}w0Ճ7VNVl}; _+rPipv(3in}c!\1w~FjXoz W89< v\2j$޳$ʷl!N);"/?>bkSevVoZ[[reM|s?;?Lc:'} 0<~"nnJJb+71bz0R 8j.lP&W2 _"idgdM2&O8 $%anܸk +c;ýd ӻť |z58ʀHa-ō^dQr'@q*-9qLY #ي}[&KхojsWۺply`>` ga515ɇ niW:^&+SYmT $x 2sG iU>8> = {>ĸX3\*uNg5ϊ/ ю)S֫xS?zoc gPiiW?ח:a9"<.MZkC^YeBveJ؞ڭTW`Үa2*_a9.x<U pb0„$K2%UX5GYPlX2_=p1жqXezV{кϼK]%*(T[ݲFpj״]%0gD>UͥڰCʒ _JmsE Tt޲sTKH=A 78ܝ*\!>%CA NܑnW)}@Po"f {!{ɓ<(ȽLQX޼.SP@l>-|HikAXS AZ!x9띁-^_ѭPqX_Xp]8QF?K5|\uju ==%%V/(w/&()k]w-ɾ@N<3!$b"9Ԏ;zi:7k]Q:ӄqWF8T%qZa !~"]~ !Fڳ)!GqZ9ORZ>ߣ0.Ԥ=>c|nceQR0&QHdV>s('\jmҦB$Bƪf0J|5jAcňpe Cu-|چ7QdԢv"Zd[!Wp&{tGܬ͏x€+ɡ]siXKWe~-^IM6w˙ݟ H^2*]LGj7[jT3HVJ_wt"`F4l/.,+ (J{rtuP@X6<'-]D>hT8T}ۂcw*|WnceȖbRBׇϿv<+TF֚;Ÿ06ZPj x*+&ǘir~=-J%#[Jب FjIp1*/8bW+^Zsr/ `[oΩ, D^ vvǐR+a0x5`|!ځ;ky(BRk+TUIl 1=*=VɌe}̑3"R1kCY}DQ:92x:ӆTIƮ"7)ɝNBiep  L1mβQ{P6KN>R!I f.‚PHT:Y;+QPeT{_@陉AD |^}LWm[whŶb/!]5mpůq˝p񄁓Յmb\/;U6o)V?>s/Amիva?TI1X`*W_k ,S- yzWe&LZ1 UZ$+$h.-n\iւ1(/dA(vDF w. 
xϤލT6r{@jWmR$c4jC"\ܑOBaBԚ#I`}'G)YƚJu.lbq NQ,aͮћ'eY??QWGA}_Y~rZ$O"a/7u0uRSS0B~_'t-W^ bU%0Nnlt>,_#Ut@ȟV^kUezv䖣B]_߼ᣘ&F?TcA2(5ދ#-x-M&6HE16nƫVTͭˢ2GlJSqSִil5L4烕p%4xXԃ%-5Y:k~k𭡵\QC` Ӛ>ňʴ2Ͷn">x .V)=n?M؛fEe Jz~E ͙+kKA2*$M6ɐ\+wmH2O؞`dva'0xx֦v,9Eɶ.;#WbYJr~M;Yj ]Ŷ JIw_ҷƦ(L^F$2]5,qJ\DLEE/D2 p\"ZC#:*bS& 2˼&i`p&{j62iM+t;ᆬ@GWHʒ62pj{)$tb\Mlo۴W R(9:-)Q%hyݻnC[I!au &S7G'Z[HoB'Ba 3-c8U,T Vp)׀Q`ML`OE۰^d+6r7]ɈdI# 5 uiG滑T8lη!tTn%W5HߥZuљ4T._wϷfyy v_|lZ2ӗnupYʥ 7..f}j|͟K;}kw]VT ydn2J3lԽ x %gN:ƩdxA>Lc 9U~{ɼAh%:ն"$>?:).Bw*tgDu$H+BBPSFT>`n5ZG#ˆ wiQ{[Go1zCĹc$aW )db7ÑjQ őpϋ(Ԝ'`j . sr{ɡݖCrޘC@-2u,E I@*KDk?7 g5p 9 nR/I9k݃0 j-7>l"+qV伣4S?. & O\/Ën'|]\w̥NZ⬻y?Oz1Go0̉YY{"0x"0᪊W?W $qF-`m .p͝vnEӃk0Q# #R t5#EĂTue +}jZ 0ھ~5W@t]/D45H ID`p@DfN$c5|Fi:DU?5|^?9|~n=R.DxCVʰ1GT`n)spOe4ybT1K:1) (j2VVM>^PFDxCLpt=C` ׃߮ ڶdG6fǛ[@|w.fPK;?-_U)W9d0 @jcN'&Tb=fy\k<CЪ#zPH4%E](ۛQڽOKfC3!T8): kHe9@ tm\w6ܺЊi}\Y=Ea]x+~vwLrAÕs eyK#mӖd)|vxzӨV\ͿeiZQ*;y!-`7|5ac=I)߿@SrJ!EmB *H(_~F-~1ei/d,+)̕6ɠyH$uq6C P>=BiM*_5<ĴYO1Be0t_6_Ӄk0/2q4_r8-_`x J)כ fOAAS႖[מ BؒBϠ(«QHϭ賌&;)pF@8.$P XehlĐj݈l[|ԒrgITdpc(8G-{aZhЊSmJ8 *kʆ@ioSSzpU= j5]QE^,k ef~)Hc q0 =GwE ցʡEPiY{#KL5P/ VR*5mf,HX%^AGK4SƅQE) [LA臫pJl>j ݔGJByOVx0s؜9lY9Jtޡ`SYk~GvwLrc\\BR^z򰑝01S|xz* )IiѮʙVrsc)& r0Aʑ( Ũ5Pw+9ޭ7ݪ)G98ˋrPLjA9%/9qRgT'Nvm򚋛|k)ЅA$dr,fNK~z}{90W! ӌ |%dzTܷ1?hNS =s7dP t ̇a$8dta~ΪnՆpwiBNj@X.>WRHf"~jv 63gr%f>aR*>(>Z~5&uipqAZ}!؏ RcngOl[?b*UimM7m@lwMOoKrm*O){IIȞSn)5tn9pms'4F.#ƺo~au57C@,S,eU@/|5/tq@a:GS5GnԪ_}֔8O1;z_P*>u.~%p-5sX?Q~i ܶIDR7IQ#~ݻаj|i7xռs5TRf ]Qڽ\Ab;D\LA %v% mSUԑLLImLNE0rbPwatuST#.<#Sbn(Swx'LO0eƘG]Ke]ʉ1@nYQ7BrӽIIJgFut] IץPԥk'tSF$(M4c높|ki^P);ḯV6*i!nq#*n(+ژG`-J]h8?U4Q2hHsOpULa. Q -Xn}DUE*:E߈A?ܭ.1,o7MjB_ b! K@azDDP /ĥXp8 FT3S J%TΜ(0T Ar*2V \B=FkH>3Ta*:)vo%=uLMV-GR+A9WMJ#>IvNjSSV[ik#$o]32I%UaNБװny8G9GĝcUNj5oًhMHm)ZUmJ~ #(0j:B4ott\g:>K4)u=m#E$Z۔dr|}HRS H(9qM R#Q׈6ƞE_@7n)972c=J0KW6GN C^s!5 $*J束0ޠ=[(&I%YSNlPӘUcuk{= 9y!QA71<2e; {mcN!x~N聿c[Ȫjw-@gJZVt7q< 5tvK-/X- 0Av :cG,8#rΆ,|f7wK3)DO2 A%zAZqdO}0 ʝZk }M-(Q 10IzV˞dkU[:AaHVݳ t{Z1r@M[و "l+&J.2bskB\%P.(TЕFj~o=zP<DMp#X6Φs-Gk@]M guƬ3f1'bvfBqN)lͯ^ǣA bTݣF']GGrs-|\/4|2 3z Q*13:>7a$/" 0*1k٣G}QAj#t $+xL> 5X:L"6ֳ"_q,CY NP:+(:+*Y-@D&M_ߥF(~}Rځ"m9?r| <ЀpcF$Ma!aO\"1Ҁ0`J60 &`Uҿųi`2Nk32Zܨ3IɃھYyemJi/{N.M{)A>E$܀J-a)l0jxo*xL3LY# M@'( BKՖ"!9{͇\2yܠA~HLOu9 CDt_c*]K#h,I402:ũLqcщX[NnOmZwO``ynu^)xwR{[hX=>krx#De?/vzfMʜ,V԰|7?@wZn|{9^D7~/z;K VJ1S6QDZC؁Oͧnfm2s/G ƭͺ勘!ή?iN 0㘀6%[n.V rulk}[mY .2ev!]^cP.rXm͹|rv%|ۈ Q!%+?,^NІ x\]d䒑A(ꋌ>{Jƞz"㕜GPbygtNyN@%7Uz1ד8*8% 3.6J9n.QH3[7|Ssc>V#U ըƝq{OH׶ohL7M^F9i(74Ɵu`#T͞sG<ں9Fm5W-JiWcDm.G3oȑY>_Q#$EFk8joP7ؼx?- \^b3 =[BVbiq>A"\sA0V@q$MB xh+q!zLͤ5Ùܟ.=#m1.O|cg,x =? EHy4X )<(i=a!,! 
X ``qQׁC2MHvO{iw\i2Or2, ҇mJbK.-%%:<Tt'.lxdgILk!:ץ:kLR|cY^3g Zy yu:}Z_D4z @#<̭ R*`5h?%)Oo{dW0+T|fӹ=4]| |68H]n"@zN]-P3r/6N.ٿxњ)9N`Kp󃔑4;f &~oiB?nVj$ 22%0ig\6Xmn'8 ܘu+Mh^V|"ZJ2iP[7IxAVˆ@lе\3k&Z*J*o1W*P-s{QA MB7ң{8q@gw~eF B^\z>[^?W N \V;^z]@)w)s"򟬚M<߰jsh %CCřG7Y7J 2NA4I P=>]z$?v% )Fqu;y %Pybt@Jfv@Jb;RH&PI- Pez0O;@K}@Ǿ6i7ɢz/p!` v-}iq›L!0YOnFxLu[ '$ N6uV/2ݾ^če>\q)a.[f.L{U&q^8Ł[=;[2M&Z]?m6aṰV&{jfv \kqf&kod㊍ T onluzTy VnO_%vusc霋qKᆧBq oyxXb=՞?,.1R5 \yyc H>P}7a3flf>jv F| M}M!o)b2Oes G"wg٦ڬmHm\(yQ8]vkf p6sGervҋBa0%$JWy{Ɋ%/u@u:]O:APHVBcAjQA'0L`5AbaH;|ô/ )BՎƠj9k.a{ּ*ܛ\p%@+Vy8w7xkT*(S|*,bs-RC.Ǚs1m}Q̖R}jʳSXZi@휨gONo=׆iV+~o{VxE֬waNs[TNڢP i=w1։C^.N浭\fChh]tBpըxNP'B:TDKrT]oxeST(]:5W~/d+WRk>I,\ҏğͣ]UVH3tpinΠˏM%$-KSp~#Hnpp-l9D CT|7$Pe,1ޱ:,Q!9J~z_E{Yhz>& mvszeC;cマ&4tX[FI;o,oz+6|&i"7/~s7W6._qr6\vZOzmu}cG|-|9s?$suуǞӄ|($Ha̕ʵ7([Q -s pfs;5ٺS]NP%}lz;NhYЊ#4o"`BF~MV\>5T |+y3rK=i:~B{AQ#7lɵWv6MRD?[gZ%@N90@EIt%29KRF}^}=&Ii3=GʧqGg`柢2L1")e%>΅_ٚ8M T.!S2rܥ9sIF>ښ=ߣnIaT^V>A2Y)BCKpq˄ #xcƌgYH@(8 0Щrhs.)ZIO1KvѤh`uI x2[:=Dk(b٢~, URhb0> 12H0OzP~+ "ag E\I-vd-,,+d@8( JLūGol6s6C¼[E ~n%[D9qd_΁Ll~1\J΍!Xy$ m$K/3~ A: ,0=H{v6mTDʎ>U$%QwR*)A'&i:uY{1kLY |H J#%a`c_8 )1\aP9/A NG4U_m宓-!M?a$ ! I(H#㏟3DõXѱ#V>s.2pQ{텟% " As)dc, bO }P)IodLx$9^(0PlzeGH $9wTXHH8# pX(GE`2!2d !8(y`E/Uq#W8Yܵo̠C/8e*^rAv :hPJz :tw{,&d ^MA`/I h@ء&~%($J$R":IdNA)i %Ju 6͡^'`k9}1HD̙i2&#&#I*"$3$)aH%|ҀIlJ ɄGGTIBK< fӬTo9hOUjH@FCLJdB >Џ=}_Cki3 49P^ %c? 1JP ,=UGI3ŊFssǻD?8.8AqGPXg!n]7 jiS>gGnG 2Ϩ #"ԟT:Q1 hVCWj%@:B튫i 퇆QeTDt^JAo~\ڷYK]?q$s[y^xAL2ƭVӫV8狀67 Zk{iZy]Г!I ?N*N#)zf盛~QqA Z@{cBW='5N n]ccH%?yurͨVۏ9GM:g Ӫr0Rݹ uGӡ t*/u#8nۋ9Ƀ'0 kծYe: &q9K+Džعv>gq9+ P}S(LۘDGLABµn \@묿^sϽ 88:,._\hUMr1| ޱgzٳ0W}M2Ʃ h[|v\t!j7B=; G+­Sk^d޷o#TqltlwH$:2EQyr~K.:du c(.è/tA.6M)nV(4ìy2dv8Y-^xӿ&vK!iE u).И(-֮B^,܂A78!0+?EwםtFEx&Ƞ 80֌=ǰZ i&q*U5y\oy:6N (N|f^cqZ!cF uW{ۗy{8L?lio8p,*fx޼ؖc${}4%!:]f@@v/5ʔ3x/2\XD]v-V[w^j\%R PD@N8tӑV9 V c3 Hw 9U)4ocLL9+mن5bLh@u8U%{j2a@lI-XPEOxðܼk:ַ#Rt8^GےFWb"$sQ*XL-Pww{;vmt{Q9Ό  0aJF=ψp?{?IFJj"knxTpbDB&N?8Ԃ1}~k¾R72f0Ro>qͨPT :\?uHip:P 1#})C:3R*VnU: ߾3_ˋS /A5Qyzg'،j o,^sE56+B|⁛uscX#j\ yz_:?>ތtVM;_o]UF|gYZÆJ vs$xVc2j 8"-/q1ι' <0ؗfadf Q -⯓xRkR߾/nK=%1%  dH8d[Dz.N IsVMv>{[kGQ 68*=y+ Z=~EQ>m,=VKkkz䏲ugH$tyTb'_yƓ~7w>釈 }j@oЙxN c/Hyܑ?ViVP;m2Uk# dr^䘣spUpms^Ss )%,>v3Dx SLABcOBa}nFSuoI9GcK{?&:5{ oy8rRUXF8)O"nAZS3*y3rY^!:f8kdz4yF,5|<ĵ\> ԗPeey]Z%i~WUIF$fzMr bSnǛ>g5I*~$nZğL)=7E< a}Fk}FZsASxmo{YZղz]? =A[aۺJMm*51{%-Л(= uSOOwF{)5bi\ ,.7_^)ڽv|Rw}Ӣe.d=? 
OOQͧ?#nJ?J/]l*\ͧ+A9g$V"E^ILe]e?IPʈꐝHDbΠr{v*!P~̙Gd'ϳ;g& K4'al(W܌<sW&)1ZrIqVj8-)[6sU F m51<8s q;ԣA cGTu!Ǡ`w-NO]s㵺s02RNt9]uCQ N^4;Z2 CN2"v9r5%!gHxL։NHvu$P$5"mj9WC`( >v v5W0ոvX#rgV5llVjs\xwnXCs۵$l`"$luꔮo 5@­6?z vX&육 I/OqT>jZEu<83Q>>VKyp[Jv7Ne,o!\lRl7ml,Oam(PN n~<S͞RatYjmt5µd!}1q[Z89GpA172uzK֪l8NJ6"鸝lݘ1LYqHN8:gCm)wж8>vMe0dNpbΘ׫  ~=F.)GϖVNk>{4cٻCS۔qR֬Sv)+pA j|kGH1 (nc, pFMs ƽPh&3Gyib1#P 5]3|^x|y6'쌫2 u݁H,(;Yl|ge:?}̺Y%d=wiI<+_?Vȹ-m v [/;"}yC%k:i\a )btqf B)8C goS_?4AuEZe@h={ $ܾ!w!,#%Ǎ0.)V BaN!l?p$P%v25B a‰1'2^_ PI 9^Tb,b/tڼ|J q~\I.xa b4Ko(eW |S2t:np8,׃I08]J,̋5P2sE7{Oܶ_A&o jCUr/  $ɐERiH>n:Hd*+wU L|sWҝW͚=5rJlڈ{blhC~UY Q1mL)7 fmp),'}媅 [w Ӻ*2BeBscZi'L-1ESܙ+?f_eXM+1 ۻj08S邉yCPa<p h)&0>SPy[s0 r,Xɭ2YBݜBЧvqwb I9TPiI%q5!fKF'9{Eylgh\+E{ȥ8S~",JljNt%Tٶ,ݗ X*t}%OXv%QB$֤W9\甠ƳR xc%n-x)`j,funުx hV,T#+cld+X"0 el= +Kyl4iv+b sN^îtpg&B +}N DtU+%΃0qE`QS̝%uYk̷//=`Z+JLRw̖,xv>dPe.WJ*uv9 RS!"V0Bׇ<7_ }%xMEeBׇ?d4bwGɝ'n,Pvx&nvgJ9{w@ =pβ^_]ΐ%3rë/s LDq!ndeh7Sy5.?TAԶ`̬,Ogt-F́,6W=E΋c`ͻaM4% σ0[Ĝ;J 5 ܇;//aW+NqDv2ic-0d3pM#K t5}C\,&b^0lTЉ]Lz`ף~TB^ Ջ\_8]QJ\IKdw7,(Yإ2 luh|9<Τ2koGd [H/_\~\n{:ς//0mVu ŌpvD _V_| O|ao_[~95_~ Nwwm}+6w0xpw(DkF3ةsԵ`gPΥw /,jX9oZR;E(~C&ks )?"UT lMd}}MDTٖ}Xq1Q|\e=DƝQ]g"B:T jZ:TBYdl]7~>G8vVP<=nyZ ~ v~'9ww'-1jS&f $c#Wf49z dBٖu%uv|} iu󙭑Çazy{:zv"+ٝ;.I/S3 VGd3'G#_0 aW͖uu^ N~]7s-({4K8ԆԄ*ih1OD^=ͳɾ>PbFNB \+Z,jQ g@^t j80ȡZ $%X窰`>+@f4w6C0;c;yγKEqdhLT(&8&4!RlM*EX8RaXZ1%Tz|Nv0 UdG#%6)+uBhUr-Kb=R|J(Ln'T!Ҙ(Y/{%Lvxa\,esMZPb&ѷ3Q@U\-XHqc9Q%1ba,W5zVTt@]c YDѼ_C#Ujdej܃8bWH%>_!Bs]%(-X}Pbr3VT;__洡jϳt6 ,8U&L>|TqSpjP&fS(ł(!DZG+4#xcS)wA\J^Dœ32Ss ]=KҢ>,~^TNu"BSj*"Й-,iPİ14 ~3[}.l?lo1Ji\zenjZ[^Ԯf=ݼjW{1NGjGӯJ^73:90b#:^BeͪY0CCK cePF1ix]n`i 6p%GǻMTܖoTE*#S4yC+K>>8+B} +UZݷv~9|^U$Li$%8}G*\6X&urK *T{"-KQ!vl*sE:{ȷS4FYl㉙'y'f[_cqLIbEѥ3JFGsKuSX`Ĉ CSX#4;j DIb)T| 9|SfSD҅IT9%LEc8PuDΣ{u(sDI)(yIHQ(Q`qq8PR:T|2$Œ{rOj4S_'gas$D)*\%@mL R뙼"tυVPh`;gNw#/sБ/PERhL3$Ỏ^['!Yבыֶ=\FȽ*zT#H$&#h[AOZ@)@*I$6cX&"RRɺf5F\t{lVR2k]5 .ƅ̎*ҳd[6HVݵHOM$o/}e{!vע*^SD,AEǂ"S.SJSL V1)dTDk`R h>J0|yꪮVkP-VoFX﫵Y%uxob$H` yE<7À zgz_rqgP6ŀf:ŝY0/}aغI{>%*ɒMU4:nYx;d` ]y?6ZS 7ntw?.</.s4g [Q?jwkwY-v`;f}U$vG22x JQ,%HϭE Ej~ssZ*Z./:-<_%j-,گkU]!\kóPyy:ݍ7LJ9n-"PAZك]_}4qyxYT$ŻOL|Vkħ<9Í _Ϫx2p =<ޘxmt4;pq]ud͋2,(>lM1WP&>}8Y:2c 69'%u d⺿VέQ7O+ NKvwFw` ۖ|7`[.bX8#o|8gg,O)1zvSA`b <|4;߮GknN>E1!7pI rKlP R^[9Y&Y{Pn؞yM$eTq=&|}ѧ$I[5V!:i1d#\gr_t N {iz|P{9 ZΘl([rkB":fq]L2>p(1Ut&<&o s&ǐu1$қJ 1LBH;"_tҢݤ@歋淼67epGVvdeQ]wrmEYk%Лdr *p޿5 #pS`6}G\i ~k'v Utw#1B$ՀEOE`  vEuA \7aܑTn-{ [V]e,lY%.hS!fJ-T`Ɨ#+`<W 3DtS.nv$uhdԮve&t;}t39Ҝ ۶XP^{WYAYקHBT*xU R W&EUTSZK đ]peY 90𥒘rrx0hijvbkLդ;#(i}A3Z@Snf9RK {ŭz"]˙=a!R ҒH dƷD5v1tZ9$BgڅfqzU۸PyҘ LZqVػ^`"$~[y?ݳdޥy-WfIu7S݉mQ][L! 
|*`":} KgDaK -66:d[ D7^[ATAt%IoöJq:\4qm_8уFL MM$m/edаY$KWJ} t"?.}  x89$S;j.'9t[ls(A qX;Cr<]a7Zl%w qmU0KRB32T1k]HTdκX$D>Ϣ=#=M i{/$èDcpcٖP;%B\}$x J_z=ESz<wƠ@JR2A?/4vbz[kKAH)Uw)d"L@MeBz--c(PIǩ4$㔤\;qnz@hcЃ&V-}R=g(aq)B 2i܎5a|3ˁ_e{CՆnTTs7 .v%̞-#'28í厗&aK g910f(s*'X+,*]`N&mܮkb3Wj6 p|p EX56Oy~0(I 6nx R%T8Q)g 2pm}׀kScF<ZqTrk&Í]*]1kLynx|&#ޥ,j^# MaJ J- +5&-veK.XNy@!50|X:,4tl 8yn%{^Ϧwl!>8**&DmF6[Ix`004XBNy⃇hh"X H) p|DCf03Qɍw$XZ44 ӝ*Ґτ 0B]E xV(!~@vP, $Y b87J< P0 $q!1ɸU!A!ƌLdYעhADK@a9foꪉ kB)' ڙ!!g3O\o-]WCt:MqTCwp1"+pIcF&z9Vsߝxkol=d;%"vbqp6=$J9Qx5Hv _GfIp<}zHEOM^iÛV޺+}tߖ-2%:Ic;l@4óINQdCIڼNP)fynkEcIuxX6g>>ѽKq4Zӑ'Q<ÕG/$4xO!uo)yG.Ӛi10m6w+>WH:odo.s(f3̯S=G!oz!dS2}0!;&(0d`AiC_h(BȖ+!jzjGgR Nޗ\_^Z۝뭢"$T)Xt/ÕKR.[}e&ۗ5?|T?xY^ Jav[ݛk|>[qfg@DV,IRQӺ q拿Jdh<.wlVjz5 f4xEV% %4YwOQ ),48:FEV(?Ց,iC #ʾK&^1}k}_H ݄@+IZVhX/hCA%%VSq4 Rqb.S(qabH!%F<Ԣ֘QI_i~֊QѪfKE2f5l[Y IGSh_:M€)* pዢ4̔C9N` Z0;m*H"z<ח%,5Si6ၓ8Qѱ&;܍qm܊*&;PWlvtzMG#J峰TBL m+KɱL#~bT6"Œ`Be3Bp'R dvx_i޹*-ce=$ДMփ֘5#(A !%ג{mp) դ\RF(^jC2NjTa5R;zW be% dzl-r J5e;bEÙQޒ1_1SavuUk* 9ZqMq: ѢgjmaeMt~28<$:RGϾkwػ6W|> Iݝ k2N3 :<)6 @FE $/++2+swwy}g=쯾Z̃_^>lW"uR_(c5DpKM/ɴH6CRo3{*Z9~LyjVy?nw*TY(%>,Pv$!{.%21d#lLVȃ,)vώH!4v+%4[u !{.E2%v݊VDv/J,^˅ϳi I@{)U:$9㽰 m|"H dZi!6T(zSm[Uc_Y<ڼj`%WrzR/ح1M\4M\4ic v Pʿ18(]DL骇B/RCbc,]^uun rBo;V9bGDJ^=y۽[=Sc5%uvafZ}zpH7Oc{_Z2zOʃZk_u{-`. i!8Hc5H42%(X^$ɪS(CΚgΚ# g5fBO!YʃRmb&:Xcf s-)jiIQ GtJhtM j0VLhvBB\DdjW6ŹD[)rDt6mL/Q6\Z1ڭ s-)0wnv56f-('"ee QSD}%dF % :i`M=CXy|*ymow[mnX3엟_..>wAٮ'PZqk9X_|8h 5DPeHJhB! 8.@ɩdkNÝ,9\k^"qj؃Q O$z6kM'q#9tm\j};|2.$dEH$봦ד8*A蔎F ݊ n]HȞhLi6y7wMqL[)rDt6mdibBs["Z$SJ虖M}ք6TFc%'` #2M k,`h[%`a-BHpÙʹ@]YeAD.%s6F 0Xg)5 ^x ѓ7g8(AcX8FV}”.]t !{.E2wu&$NfKy#:cnY SMnńj.$dEDUjq>nQF'Q Ye hU o &K{t OB#sm ؼJӠ 2iރ4RËPJ(Q$42;(듺f&j.tc*4:" ,e9M;(j7I'u,RuLSuv$9VQ:"1L'fu*~x.I` K#@bS3(yv(8^Lg$[MԌn4y7l'{zBS7#L;39@0ӿ_u-v0 *΂E4X4k֍gݤIWfuu}fRp>㷫uw_Tu3h.޿gdIQZ(xc %/+?.W|۴qx-^[ GĦ:$VՖ1n;oFa.p`p9{ (L@ٽ^x4?yRtnn_凗H݈;ۼ JX#%?Q+O'"?fznk{0wb %1a10X`Rv0xYᔲJ0u>CFc4aVE久.8>.2LcBA [R7s{,Іd,G79'G)7NL㔳(_wVWwᾼK$Uͧgz t O=TiJFG,G榷qz`W'G` JO~ǫY|⨑8}tH9FZTQ.(ꕱNV4 ?`wd!"w4܅Cg!hD#Cj)6ly8VX &(V˦2ƃmH46Ar7Rvp©{DYMOt* S H SMI'YD9[gdJ T;Ty}ĊIoHEBEcW>HԨibu?Jź(f_KXHx,\ eTRo'8X*BgKGa%b _Ca2IN٦Z!AAsؓZq_ mWAEFGoϣ6"R6±=P`1x;#@dz ?Y sj;z$#0Qv$EZ~2Ÿd2D+O/HFIkXz)/8D\P5R&ZYXO$UبG kz*ᄽA r4:Z80aw}znއ궊ЋVkVv!)p N ͋$ S|_vGWY#T\yV1fqeJ2*P}$+8[|;u@eX|yB8hA-lja/-2?dai牛jvfQK;r=:Q 6ou+kQ }._*K>ҏ:ZuK͂N]g\ގ@vrp]W;[:?Msd#!KݴP)u@-X"L15mQwL' GO:yYg1嚌~;Q{)lpy~~}qҘow(,B~ Jk [؝! 1l ǹqC>I>b{V>XF fxDԾbJ +'#2ƷkEa*%0u%bu( t$ZfqL5Na}\&bjzMpJ9-!l={TJ~R)s堗ZaƊLFw&χg{n.`-N6է vBmu0xs8ǛO:~f9NjSÆh7*6.cޥ`̻y ) &CGsʪ<,lZ*J{B*LH{85!e6*~@r+(}m?rݣ5K0,~<\ !( I8=,:Nc$<81FYX靍Lbg-bBvƖ}~BX$T)A2dx -e GG1"fac"iц2~J?K"~J,7˧;~AdGߓy0`3tv`F;Z<^gbFä®bCp0͆Oh[BbDj.^!멫"uc\LX5Hp5]O3e< O6TcL^˄C(*R˄ MLMXH%Ke n( 6ŃDE(8e" T?eK+sXa@qb4$X1⭯1KeܐRPJ߇Y܌EI !E)XDGX+#Z\\?%0f-@荔a iR(B `[5 REdypCm`di>I|n.ObBo OzB0ZXeU6wGmX5kÙHvܚQ&D'@IdeFI0I(nz&I/~K -#`NULu9_qu& idQ%7F `~&rq~5o#Wy+"k@}1SSʖF>o!1ygNG3?Mi4p0rq;=^]^5J~9W.\\ Ȍ38XQ!7{a4M+^CC`A.WxŧX,Du#g6,̍Ś?ϏFJ;SLUʄX9*˽*HpɋOcĂ3` /D&/)Few`t?݅kToTyV_fB<׌Zg+,(eW4PPPe":u p+Yn2) &C( Z!&>88'(b`R:/m@ձ*w1'SM(l^RrWJa|U1+~]ΖV@uW {ɱP Azp7Iq dcw}5ϑ/sܨA jb~w#tw4yI,j`c{~;Z$Ry^jF7ۏtI¾X%՛:֟"'ӥ6dqm H{KK0×ͽjXO0ljnټzwO7[`? F1h4Dʻͣ8Gi3Gح\vp-9ة1i^i_zQP^'Vn}*ov>c $Dcl4nY̖hOCnAngiAsL難Qέ00FQNG{ _݇yDBRƃCo܅OW q1 ǏY.GGx}A/㨇me18G:a9V085BfB!g")ˆ"I7!z"Ú@0Z8X"Nhb5Y6ս:eކ;ݱc{^fuU) rIL,u2-&(R[&Ζ.ә8(!rr:?K7&uMz\\~6^.^}RwLYu^xoyy@fqnE_4nF ut>N̥y=Tn:!Iä"\_bQ=FuSFߨOwO׍;2/O즿fRy2ыjqVv w W@~@t:LY?L~krr+l=>Pa+J1oaΣj z}5MuTc0/~&ߛv_g4׽!K! 
it@..5nn4?%7D !,\WJyR+ˢe鶆Pmysٮu/yƛ i聮wƣxD+f?h?s3@s:e_+{t"potS߳|^'Ny)͛VR."A;N 1@%"8F0s 恜wMJ_TtS_Wg 4N+uyT@/stVd(.[%%[Ḷ 7S$:ǵث9M3$Kf(_ՊE# \+q54Bg%Ww6?T,&ZF+E sJ1ނClh> yj^+ Ft"9^U`xIv8wIUx ψ v 9-Hp,d򂤤IhGB㰹H[Ė_{dCVg H*ATK]V 9(Bs8 ̃RQ蔣n.*%xVT}FG`RQ:X;Cؒ$>Lh"> @تgf) /j15kN<3M ?cF/@) lV8 a~o.+?m.#ZH3`n.4e{,, qF + ?/Up:9AXE2[u{+&*/b8W5 I_ ˦-ms%Lfu$w`ar?/qrkuTדWK.^^:\a:T0nvo|FӉп+}:k~(8GX]RtDsٛtl㈽״""1JBuC C.,2EUƷ([F7F'et8Ej_yAK4Vp"i9mjWTƈJV h#uP0tKC(xgf1v~FtK֋op}AӋۑ&`t?Wb^SWD\í:1I^,/F%ϗr d,|~8,'?_\+eT-W0ku2:J7a,8 _n% Jg\wELPun㜑gjfg+y-g+dAy]ԸKlcZ7ɋsOp APQ7t4g*'bBO35!J^BsK"tr[MH4;^66}zZ BRThTj•VWV3"};vf.e#}I5ԁh/1P$'o Gvf2vFƗ\4P{?8 0Rdbl`>D4}MVi,q,IVϰ.rVNVrHƃ"&:G(.u"2.Ni- ceD*Gyˑ+e N̳Xj1)F5 (6pG#!‚oCyP&Hr-sUh&t(y*}? OJ3}}o Arx?IыS<ո8UݜF'8)uߛͩR$A%~;NK;egnGn4R_4j;:ˬ:k 2E}:LgvMbud}]6OGDZԸRy~+rƢˍEy`溲ʛJ"&,J~6[}_]ul 6XC%Z`.=n N#l,lyUBX좥Xxk+`xlhV6doa81 8vHORAz3i wVU ΊE` G_{AODŽ#V+4`NBɽvLD#@`h1#E%+fL:f%ΊT#)P&qN7n\y5&ȱS3t)cT#/ Ɉit s M:eE( , (1,v FT.&Ho)H=#ZHgr+ y*88*&@]%:M-fXmpEv0FRQt)n!M25e14Sř t.5!FOh=iq*KAS&W`p1S:=a2[6 q]WBK'IB%Uz[RzZqI &aqL0  (U S( FvL+[P'  q9փ B#1[~$1n%wr}4=MTlZHp,Ǣ%sXUF*[1dJ> ]1b(@hP׹N5b|X~+|(C^t4VW-u8̒N"C~|6Ԯԁ#+2j]aFY$Vay$݈2cCwKnf.fh{K1Z0yr綺Li˯m73B\@>z`>sםd 𝊐|4qMn&6Շ3dj, )o ƦEYD1kZI[ DNl-ӡΪ@Y 6Fp€?oQ:d>PlԴ s;=vD6On(R}G.~k!]-g9W_>4'ec?f{mdciO*UTMnݯ7'x2$|DeHqm^RU1צ QA%}{> ϣS̫qlzpQLޠcmyMMu˦W 5N+m&y>x Lӿ\О6$K=)ySW?̛9]~kK5zxȜ|`{kۙ^>9Vyp7V~H?{Ij:HۺbSۘ$۽8ǃNfiwiYbܸ=G>&|=/<8(ˆ Bd6^EP ҀȖf]Bd;oh[pk+Qǭg‚I;UgТ` FNE=i(߳(L# ]vW, @Fw}!דګ@w* H_}_J$q {Q0 E Qi9(wD(g~6Ͼ훧-JYwʹ*A]md}l6v. iIܝv~Xc\j ͧke1v֎2M~j $ۤO%TKpLM.?ƹιYjMohq};k:>Ť;O&IPw!o~gM/ k=v{:*0K0[M<үmY/J!_+|hުC!KٙMk#1yǚIsMY(9Bqw9xS!")/JZC\1rE[os}?l_꾹QDxrH>C6~8FnhJ:Z l5 l+qFVYڅOܯYsjěg8jNĶ[ODNփO5\5:JiՃ"Y)ܘ~\D>ȱۑV:e pxu^glĖgj1=Ⱦkz9k=wTq ǫl6A:T^HOO1w>tۺY;.;]Xf ?uI 'l/z&Z3{>bE} EuBCAW|Qdjf~߲@HJwޘM۸0~)'0SPp*rʶ`yQ`LA0fC0 e!Stbj+ېDU㴟zN A<ڟcO;EY3MKZӶ.P$˩ih/V`Q>[ 'ZY'(+Ũӆn+HH_21X?De=N 3L8u~<$h2 ;5ʀS]Ӣ[C!ˎZ3l]RnC SӅf `#9uXhzeΩE,UtB퀽v`O3v&3DۖSֵ9mF%S 0i{-87]hS|xCSy6+ꍶdi?m]s>Ӷez9\ gZzNBaIh,8oDXީ>\}S5F1B1\]"݌bzwjza_N֍NޥGqz죙Egpw9X5a_,OXdV#w` )yVQ3 ޜEfzBy^p5g 4^ߌ_P8O7N,tE9|Mz#=Ø+CuIct\ȑ3Z E>4zW6hl"@2?Kka}3cJ4da \ R;Y,TS(ëL镏a*ko8^CQì#woǜzQj%HsؙL!h۪S,Gq ڡʫJyد?xfH^P e2"Mfbw '1d&K4rMA 1HE#c^bQ\y>4QG>,+.N^A g#G{?ߦ`F*j"d_^N]xvbf{`'.RR"%.RR࢞Xͦ0b 3vct}9B3) G\c$V F_fHlv]vVGc;Ќ@H@w ~@c^9Aߏ>{eP4юh& ߯+AjF%H.h"%DP Lpiy۟ގGr|3Ld;Θ_dLoӁänt0$VJV'9Td)2 8H ["Ƹ @ftB_A`^ *Ei,t9"%J"D[:OqƬYgLZKR`?aUaXFxePqT ɱw`=[f*%izଃxhaai(0ӑO@KOFA !sfA [Ú ڵP,_;s% \ä;e9SƨFKARe4: SF 8l@1,C0Y[ Q]Cއ2V`=B6 ̃4jƬƐ(Rĩ,QaQ,Q)dОr1k %ޮΚZ4PSғ/gF4v tS(V8ٍ[8)q|/S6ҏK6_:_Yɹ~;b扠:V? Oo񈁧?J"ɋ]dcۘz&}(+S*@46c&ć]Xyv$䅋hL)2x/ͺ y>[.MD'w}[[YlBZ6$䅋hL)>luSt[.MD'w}[S;nل6nmH H>kf$d-&;>֭Lpi-JukCB^fɔ҃]XnRAn4Z3woHKźL m컵 !/\Dsd`߭]5uŠ䎱uqg9UveԺ!!/\DdJ絛[7QG떋AccTtw&ukCB^fԡ5V߬[7[.MD'w}[US:nل6nmH ,lUzkM=]\ mgWEH|lBW !/\D)"[;%{r[9`݌g7﫯Ӂ#?U?'isɱ=[8^ g˱sgLov^z΋g9ErDIcr& O-a_4ƦN VwG N9M q #΋>;O87ml$3tt̉ !CZXqd3GY*nj/h$t!]Y&-̚J|EcQ*i%! aaBT1 BYzpg.?d牪Evi##aV L^LjC=p[Y> Kx1Ax?b vdwAS<z]Υ7i;K 7-1H""5{ч ^I`Ƌ5=I -Nd %B,Y`'*0X%4e.:*]Q19FJ>C R?S=$.3MM4hkH<C)!7!U NBF@ ;0*Iɑ憧lqa)6PAř y*@sჅ۠E`Ɠ,{%y+X,3!2u ?*Et*R&ҋ{XH*Z)Gy)rY ! 
D jA\zRsІM54BQ31z޺@aE`ޔ2V;-gO|cBZ VR(Ol†n2PB-=2jNq \_Mt6?52B'wOof|,mn~s7?&¿Zr>m6Up/@ƻ۴U:Ntu_~7==_w_aQ}܆=llk7$u _T$u (q%-«ksx㛗!]qG jAŧVC I'nQ*P(U~c5u\8͸V =_sRS<yQp(GQ@@]T#SY/e< `\^faxבIN&s~l$b}I,#bFqJa5 Q\S@}L阓6\% &bLb *14sPck'xp,@4ӊ)ˆEtl1"APV^搡Br_,D*@T6CtlbMQ&S"  >FYV~,t8Ȕ؂-&ZF+E !cԠ< 1`GXoޏfn{ǘi1jt-[/_KU{1S̯*7V4UխfIC-_mrwyVvU12T+KeYܼ0{ ,@BOَ.ϯ_~?qCwPW5Q%+Z1ncBpw8 L`-B 0ԯo b xb4`zoS,~~;{G% fQ`mUr?\2D[JRz3S-q> Bj\W^V7p*g..cCX ӫy '$:x~=wk_Mؾ/}?'Ah_/^?WK\:BUWZQSaL)1 xm=k@rmN+W`͒anR {-*& U a+< t[&5*s{ʮ\i4)5vO@}\>GƋr(IF؃jf[ﴱ]NS-x-M,iOf)QtGͬ5N`s?z4`?MΦmsZINHܜZ;jn9mM%(A+U>Tu٧HN2xXBEcZ*яWN [)95JhRmY//_2˸1B!Z2Glj7D-]E* 1ȪXF"Y Zr΋T ]Ѣ1[[ Z[)9C&m]œ޴[ڭǔN ; rJŋE;9 +1} jkoz|S[MIwQfn@/*ɼe~0c_d~<m1׮st-tJ,tmPmPû~,HNqB Fh|vY|-E5aX1vn<;`FjMՋq, 򠽾͠N^ ]{Qj]Įk馻mTbrL͖te0ɇps=^[;iՃ(FE]x~죙>VM?Kdg7"tW׉J$$YF #Qm0hDq-Cay.߀~K s~-r倷"jɏ߇˲sG/^&Gf6x;yJG:$-t!R#CTbq~gI'CP>.7En @Q;BPz?Wu(tOBcR o\AW!!)R=3?a蓾=~}2( U]i? =\} #AixDTb8b9yb#)`.cT2SƱq3F{)I M_J,/K Қ|Ho%PzƠ=G6 c"G40ļ+@] ;(5Nl2VcDd]hin]ͪמl=eLՎ_CvHp\ȝKKChc~ C. + R(KDfz>Xlﵵ=@CهfѲ%\zɢhGO(w~aQW@O !DtDqN٬PZ ;i }JX6jݿ'-p1~m]Cy+qxH[H`DZr) D;E^`x nu}bw)zutfgij%zZs, 2j#-aLlG"PqT{oRHD0X9hߎ/HA۩MNnyBce#lt(5ZDQ/8%j-,"XTuzS9,b(P=!ÓpGsFPMJ ph2B%ZPsctD8¶0$e+T'o mNúB=GdwiubDpoٵ~:B9a=fj0Pw&Y}gtrd=+ϪݖR4! ~?AVGHd9WD9ϖONh)YF5c.8Ͳ#50bٌ^+^w[w L4ER*@ؔoOvb#Ҩҕ #S|7W#r  s1cp6xm"sp'kOW B,LȘS\t~NqE\ DKzA 4 0d|,עb QTsЙVx fc)4;"QkFcˆ#Jw1D)8 ́zd6@kX^L<37ͦ`1L3GM&7,Gkb>Oyo&yJie&҉v?ir,7Nzo󜎅y3;Blئ~xWa+_Ӌ3 |5_,?;8ՂJƄ^Xj C_Mo˛jΘ6_ǘj hz6Z^zuBDTJ:94r6dݏ\A;ә#tӈ̚JX!km42'(QߵW]wvvqyG͛ZIcZAvQ-^㕋*\ղmJZwQ:Y:*X&8l(%|*$S$}N7إ5 Wm52Z&"30eG0IAFfR3T- c!a.bIKYst~s6\,EreRٌXA3ߺV 8X%.CG6ZDLLnT@%@}R ۾|hti78s7k?ޜ$-cT,0Yk5h%QV8*j8laMǚ8Xwj&Kχ|W@;8QN=ɾxgɑd=eL/䖰(8ki1(1ef>Ϳ٣!uҋ3 iZK58'L! "i#N$Ib5p% R'A!Ja`$ D.TQҕ':„|QX'uS*Ak^kg -LI/V^k̢tu<n[n ˰ ~I'4y]^W'cB;bŢ4׎QgM/f4`4~MҼYtzkt҆ 4U`ihs4j~]7FGfR{HK"I!&" #L"V"G]᝔::J՜'[>_$x=չTI5b5:& kƱTAP_RÕ5`:-iBE[úaћ`򈫚fq8{G.hOr]ww]K1RtsC\w+|sREL}i:9Ydzy+a1d /K< Gj%@72h1UZB5 @>$frZ,HJ&H:JG[_?BxqX1_JX}Hog.-A= qfQ"FЮRCT! !duҺE+ 惖!xri .p9SQKl F`ÔvA"Xw1@t/@k0va { 8GA(lCXh#[f-\iTE+q+q(۽7 Yr@H/iag)D7ofR"ڣ P3Ȏ#=" p3y]?{Wȍ/E|gqpݽf%@6II$E-[Vj &3^zUbJTVѸARŽ%G a0t&F]2}We"sjUnBj?KwN9n؞Tǐ[|DL ZFE ŝ|!J( <Lcf͝,7r9l2a?QɇMbh8Ox}鰀_Θ7oYOQ-T+YL˗UZ1AK=@ZJD)Ҕ"%NXiY#D۰$o&?w2oةcp:2QI7V;Q>hRS!#s-7Z! 
2Iad\\RwTtG|GK*ۂ/DQIfUh2VWa2Gs}tk1?~!0M XP\N&"ЧsBht鄗4+#Y$A/zjgh9Uz+3GX J=tnkP.7/.hC),RP2+78qVf@QSBs3o=)J _NFiCJ1%ZnAoUBoM)b߬uZLTޘ{]Q2ta=z!3;Wo{7NY5nMͽW>+ۻysJ@znzspQMD Oh#^Ozps~Jrƅ6/# UmL:)XYQnϛufW2%M|^ _9 T&GWҷN^ޘ a^ lAM VJZAa%e0&D#J%Z0ep VѡXs6jBi_ruCf7 4:f)Q@X(uEDQSHdoĖ(:RjEVʤ3<'qq)t4y錈Q&™4O8*5Hy0D96ܨ@ %(B89(RJe=v]l6^0iv8-V6uYāy;^@ޭ*d4pU~.jQE&[Hd}鉿ݼٕ ^'?ܽHq#> _#F*-ER݊&|Xw7WO}GKTs ߞ\V,J)c&v*ǙcA)qj/'h2[>)Q*9]G ђ~c6抵O=@nᬡxc͓糯ls9= 䦒0=s74?fH_o>c͆UL֬u\-q[ Wo畒Youg'9T.}ǴV .4Z.Z?:+R[^iϟ~//Elk˯w''%KJ2/TDqLEʫGG\pm)ΥơtKw,Bu3蛕Q+dwhהc]nٷ!Ӏ Nor@F'bF/@TmXR!v2B!KG;7_MS!mj|xT !FT i*DxCB쬈B~zpFWER ښS7WkkiÁ0fo c^}֖wcd-Pyɠ^j|V1Tn#p2tV-aK`9ɳ'V^P jۡe9HpF/{Q|z _'37+p—ayn_(+A:LkD{Q^L/==$>\>M-C*Z|O3C뛋,jS0 o8v>|a9%"=B[u&P;Z e&UEʫ]y⮔@-vG43Zxnk%`5G4>E&ŒlT,zɟ#l[Eb4p 47l 4_Q:"鎚/e{SpyxCނ9 :o~| V^; 5] 5_t7ք1d.jne?eC20XsfWof eD7uQ-i@vEЇ8tM_L<5%X?t9WgMٕx~t}}=yQ=e&!(M{oJ"{U qc&[P[3(ĝ"LjYF'_~k6_>u>-0[ %-est>jfWE)1##ʧpyPlʌ}ZSQFEp^..h+P._qI;/ԆӨvqvNMotyrmӭ2Ԉqd(Y2ęy 4҇`\Rũim`&TՌ\FeV`"d2PJ$:kPTK(kZ{8WG) L:11RHaЈQZA ܗƀ:UjXt1˱ANQ+D-Weu*^%R!BTJ捣WpU&jeI] C+{fT!2up&8ōps"["Ȫ1F-L7-D `R0)ZUݸYQ@o|q8 LD&y3Q?*Cp5/G-*!A8J Ɠ_rL8 2f(f%ZLVpB<kϓj>ǵYBjH/fV vYL|v9Ybdh/Җ1ea6OJ&';[>,J\[_|̓ޭW3on{w^uygaQr4~whU ]ʋߨJϭ(%4,kP.nů y"%S5=1VvAFGo֝oMhSօq͒U:I;0?_M~志iwmj?[TшZDМg k\Yj1 Zl@IÔz>H\蹹gb>8IZp˳G'ܝ>I"Ө1jݍ6(v0¨n#פA.}T90a,Lզk5j"s,PbL}Z]zOImn[AS0b<~*o)om59>Q%Rm6 -' DF(sDhl[sʲgHfČF:"Y4E֧]1jh5'*В(#eK Ai52BE9N6ed|gg$՚=Izmgb?ϴ MmZRА#N8C!k4WZBwTH%ҹIj(ҩ\=# qY~]Fsx׵(CzV%Q&̕,^J*Őuymo F7Xj=)SU?{"\ϣIUcUG7Wg^,Ͼb#w!5yAD{*RT e'HN#@fru;{]7Q̿s弼W*ڠX,׽zdw|Ӊ.NHȩDFQgض#0GBj}"e`0V@*@ ftoEJJejq>QR(>Ss:H/;73W;'UXΫ뇣vr: Nph]9V1 QBY8`4fZwRd|VRצw''^r9^"ؚUJ󸏃gzȑ_1r# /yJr e&ڲגO%-)2'֥ɪXX"# Jp_`xZPp*^- }_mqerQD$7oiOgpd҆ҳhkg\ѣyvd%85z`VJ\29`lC<O)"G򭶓A[f&b1QF©P93jHt EݷTjj rA H7j/Hg[3+/<<{gq:?WLtI pB:n$en7t2C[n;?lMdvOH ʈ nHY5r>*>_ & +`yDL#V<|PEJ#@>F# gMA|{X1v9^!&.*[,yU{&06h%o?cpv~e^x1&Bmz.xln\+Woڤ4@A'q^>;b#@ 30b`HwAo> HԻ蠗4w$O>% f/6GS^k`sE*#<ڰ֘X"嫡,6Y7Ge8{-YrW\YiH% '~zX嘿Y8yJkއ4q6:Çl_ ?o?y8Z0Eo,{(OK>q[D\R_{(nO+R"~/mtܺFEXn鞐p;U8ېN&G|-ik n{ei/-1R2琘pAJU,8g)2ϬЁ0\A3+v(qЈPIϜ{$wY*nn?YwfBӍ3+L 9zz2ɾt܏4_9_5:.|pYG DwHE0b0/gZ0( р\0P1J SnϚgΚ͚@`I 1̘ 'HQY^9+@‚s-fmPӝL&$ |"jwRuN$NRN$N9 A2"!> /b(x 4m(J@46dEnĶv?PQef` J:cI:&cI:&T+Y^։,Lt!:X#2b(aBGSHU^0yN^mj%`Ɣ)&m㜃MfSⱭ>ێyp=ō{&sNb .r|Kiy!2rC.0Bق)jFI΃mv6R "0"C}pK((:9hrHe\g)%E5 DqFU${ӖL^Y:K&JB*8*vO5YVNTз 4/ oTXk%\k:p4l`׈8| ~(:s(TT xIZ*&>ťz9deN,0"8V*(U1]脥eg ٨Cڐ~--iJj^0ЈfKڢD-ȤGhs/B?!Fef8Z]ch؅GiDh6F{A+fkBp0xa}ZE4 QUWWC KM([nn$jl 4U!,"c"HV?.msӉt,ݮlJd=.J9XcvܐW 3Ph!j!՛a*LE#$L$wLeA*H0 &OU]h"(CieFht 暷uK-hU9>9\4{"5W0+ֻ\BHPr.[FlD[qߕKCB&riz q#96kk LrF$=\,Ft |y^(\iK5 g O$~F "?Ɨ"ND(:{͍J&1TT:*hTx[L&[ȟ&Jȡb&qrƠUܫ-@d\\7 M),\ YL3& Bʷ03NJE6S6yZ|Ռˣ۰LC|[֑V[CN $UnQGҙٞBJRc,a؀^Pphk6Ⲋ1tp} ~Iw0r bԄ &{@S_:Xn_+-[#~Q)D׊>^)? 
7ۄ*7%*ϧf7eݭS"̉.Պ5=DP9@ Ֆ=\_#R+0T3oNȉ^;ZNOl;&HX+ ĠX)vTh.k8߫`Y3%'ҡ}55]b2`imh/URqYZb.d>7nrgl\l#:1ގZnޢi~ђ rpC.HQz_%zS} @znqss ^MAdtd&pGFO|jr( +9VZB8^/N73?"w٘wvhlEtqWx)+tA1MB|ȩ"> 'SEv-*$*$*$*TXAuUR$akpyVϵxcA .,ɮ"vVUd/?nISQF֗&g3#G[\i+4yTfY93+>p`Dǥ2]VqGSp|"0n&)-X#WVmOE:[cJjü3 ;)^rpވm:5 +Oe{J۰1u gTSaj.8 -B*3Zzl]*tĵa%hA{6q5>4Dm/6!Fa_lXbe Tr[BKfP{(Wm1Rҷ6d$4&VR\jLm4%JRݠZLyaV P^Ke*j;&Ƥܥzc%BPNʒelj |jOr,/l1tj3\5=)R%-Xּy&CG3#`u}b*E/g-L6} '"5 _jO$A}@DdTݶYETx[LdgǙJ+j`Q@}^N̥2gRk.uo:Tr˧ֽ2̿<ߡ!,x*0<k$gg,E󯚬ҙ~-)LBLhI j8j֓N[<" u@`oZ4Lz7KbqT^O =Sz\p~zjr\~]0_>؛]!}lGbG*\0l}·  Z9AGDp5FTqߐ灦GDsS# :ZݒX 'F׷^w[Xdu1P {j9)%تCng )2RKEwc\`OÜdmm' `<&|e)r۴א:Y<"jÈPydo˒$"灮<9KhgadB- 튆~`cT؅_?L*MO(/ZqH5xQ110 &}.G%Pp]νzr #5.'ڙZ g|5옪~ r6*RÏ.y'^l9t(~$P֫ZFc!`%|`1%5Vʂ5=Kҁ2D~z0(/H]f*wϻuJ:z=pcqUࢠo\.T+8O 3J鸜7}up2޹cx9hbBj.NA D{@ &Tʜ|}%<ѴS`jQYIIIIU"crMI$\h C tAbA˨(sRJbvVc/?nJG*ldoDyBiЖۭXf&ƌ٩P 1ӫڢ{U~~5]\?}*4DŪP3 hAr #)F .GQ9TJ(, RX "ҏQBd+3BXIXjbz9}@֠q߀U2Jf@w &P`RFvj"!HMs$6 ݆1L+ִo!Ѣ#krK"ұaW)x/M!Lw)|yy=SK%wZJg>W8^-^zBrstFW/y?~w{- }_mq=dWee>e X OvqGJv\7ExMڰsUv{;e3<4d6+UikqE/<_SžI/AQԌ7{Kv}eXhF2%[G&ysxxEKnϱT%$,g{ΦxaqNY[Q14ű7&iZ3^uz`O%ANK[$OC/;x1욆m@Bu]Z Cq[wytC!tzPA 0(Sk"yhik)Kf4DVw߂pD2o ir2- )wh`Dnx4BV+x]QO.q xzA@[GLi6z@Q]yFW{6 ^G5]%R7a4<4 lŰ߽u'eчF, ݭ_H@q8-_݀+arUltd{7k R⅂hX8^ ^a%2qC<ڣ|d|9^x.ϓY$՟3la\5$QTr,N1g>UhuV{nh1z~GcXIͅvZDkfa`YηFdѰZﰚnjK:ř[SԆ4~pU]kV\ZVzwdb׃xy3>/h?͌dZ j&v)r>9ZiQco}bQgfH|d(u˰ ȿWx}|,n?P9IԪT<`DRTbƐ WERsֲ1hug +OQSPd庳<Ǟ4|§9:+3weǠ.9(M ?? ={zah݂(㜑,I2dF:3Gʔ JȘ%4JKfWiS+w~Cɳ"ۼԡ>4C b #]m_ X}F[K=J;J}aEjU%a$H-'K9LkHZ ؈@TR8&pngӢ}we)E]gx헤@[) qVF c2N009͐s 8~ں^xx3vƇh:G#7W]9]Gk\u;3\Mx"QNэ|1sgf6v`[.3"F;06&KXfLe8`vU!u0' K"R30TPdjRRIf(M%H kPӦ'tR"|V<@3IuMl\1Zq#OPqqBmOHHG[UދT~4xDhf(D *)l.UT De6@i댘BbC1=-\6 3or3c7!܏xV'<$o=D'|>/ $D`g7@=!$; D :^SI'9P"[)tw#6Bw/Xi]x!/]!N$O'֓J s֒H#M9h $#W'-t j<8rJxUl!bHz+x]xKa6#)s]x jȹ<@4^%.2(o 5J2SXO H#ED]x yT O.LUaŪqqGi"9m'abb~q̞vVt_#j_w1+Ds~~pQc<}i ,RpluvF|Km(AwVx0aׯ>でgLl{>iҨч eg<{xq4}Enw jIWݣ#>RK)CiB5/+ "BPpPijqR%3%x"ƚ)X8"B^fY;Qcלc w$(8_:ۿ_C mȯ]+V0ئp"bJ}7VG.UU'18@뤷L&^L?I06!H-O ~hTNϏxϋZ PǷ:s^mH1xj y҉ɋ(HOGr-{O~kK4wۇJ&$W7O_Fwc{x!oWP/E lX؇x !СxE.gte~oNp*%'ao[l```r9ėLjhdMy5jFĿngrX*}_hS9F:9Q|J,Z-RT(ª[Z+} 젨q5H9ðc~ 9`*RjHk߃nVU Br.*Zx"R6bNb!#qK"GiF|ly", Ya 6ʹH(jPT օ dT LR Te)d O8 eB&IH\|x ,*~g.rBR 06NQH a  YtIvR1[Nׇ)׬gUiwD7ػ梒jwi.VS`=w`;W^!5EڗJZ29hUN"UԤg7H3c%8)(O IS2aT++fGq9xb]ۥo`l* U{(Hvvgޟ3IkrW%.jx&.S]{}:`ZS5+)b'S)*>:rnyԯZ{Wt1Q--r CkVRs5WtN:y|]'UN[*Sá*IE#+PvY.*\?_\*g/NNߨj^˅t ]q|j7~LvQ2AaMVh؏?͌dI8wt3A7nԐ\i,:?ܣM#ǨlFRK?9%x G4 *rRSx 0΂l]R0<9{#8u2?9s#"󫻹3<ÑJAd!\zCg#^vb@z2ex4f,g1;~%h`}zZQbZHrX*AfZhuA7swô"o>-"3n&wnoctw'6bwH')gPʢ =8^ ֭|)0#^5;[q$n! 
pњbo gվ@HFk;4iBir+TF@xt%D:tYDey 㕞X rG _0dhXMݪ/J7,&,خL֪=!ua&O͟Z[;}0j|wviQR?OvX>Z}{pA//.$f&Oy>e$^{:zp0ZK ;E*0ф\'B7yy[&tA" aǷ/ $|=?,7 Oew|d D9 =\2T KpfT@CYҔ1f+nJ;5A\Q'T5n@jKcQn09: T>),XikЗa7 #Lۓ 9/'}SCԭ -T}(;e]7;uӋ`h!G6 dc;vTG}Oxo*2U JH%Cs hk%%8H^P|(^0}Fq$0aeEJP'nsfGKB4``c,'E7x2&CN\ y5Ic=CQ&`slOP{P=![|H ĮRxg:3li\s=Y4|(F 9;,1,owbוl<7b@t>V9kr89EL"B:,k0l0RaNu|ΔKe-D(%Ab[ )MRI : G9moX!{m:笝eTzt-&nY'Sks&"TFuG`A1 gm2Xwl2!d?8DBٱ3A+<3\D=\Φ&E9OpW.r]+ W //78X%"^Q hUhN&~ׁonZpp KH )r)Yu}* ~xp&\$d[ r@:$`8^ILiy$W/q180 z|p }LR9bDz.[[2 8bGܐ<ڊSI;iOC=z]mqH0Dk݆ =|&$ iikL\Ex3N-[T^ }Ős,v`=ǐH "{aJ-Qؔ4eײ֜i@A|^`l[9/kzy Y=ތc{ð~ӳtP?rM\IYTt/ui] @\86q6USCD kh} BLaP!Z_̥`O^ )sXSrkpHxGdwd*S4RSDD4wBY&3´ sBÌLܟ( gPYwC.L9`HOO?{|$׎ReXav#bH TT,]Oji(oI%)ܸjdnk!Y֎nQq]PSEHݤyU KfAu՚}L;Aҧ2sgri=$椡{]9tY˞XWhC6Xx&-{QTZ.ʧSVu[hzeZGm`NG^vJJ f]JyW`T+n$DP(f ")DKW y'?ftLU=j8gg%H?MQ<:h;8m-vTIKOCfi_%&Y&I{ 9 :k$8~5Kr]llb.&%\4|OH{L*(PNՁUPD蘢pQ$W={<Jƞ`417[~;P5",D6.O]|(3gZ`U&tʄF (H@&)ؾ -9!Y-|2o˅r6c!Rݐ>~zVedV#Ę~W|0_?h&ݍ]W/5%5<{twc ޭ?3W/?ףxmh$oz' L~!K78RܳpqTO{;ӑb:좨Cw׏= XdgWr]鯳ne|v';7ľfVcMوaEnhGtvw;%Onok//E7?zyv>a2Bbϟ۪$(*"ROdl&,1 jGt:&)ѓrѳ1ѡg1=l-peJ@r!NR%hR)4Q*_룀T>.@)~:p0OAE$9i/4F/F"|!^eH+:鲨b]g̊ , 欪C4*!{FYWo\^K+PH2D`~ 3!\HC@rActf}&䀓ƽk,NDѾ捊!L(}ܕ18CW3 Rv_"C1d`қ_1@F/ahK~?NjÓڝ}$!GYun5G%B67pFۊ pGcrBRi+`ȫ?ٻBxeC?-Nx䔉э+f$Cp;$I(6x$ak)4TYN6gxqY|6wְZ޺ۃ6ߟΗIoԾN_F/͚[XB5h4"Ơ pI)JV3._zAD0*O5 1::e vׁ=߇|'={+:8D)lywYt"L D]!Ρlw! -}֟$ x%`e{1D%d]n]$䜶;+DdÈB]Fr :٨//$Ϣ,ltA.}ko |/tqiX7No_]_uTkb9MߊStn!iNF&L)vNA Q6K,ys:Kq-0 JuP?۟5O;gvFvE}숌H41f5Oa1cRJ~>7.M\Oβ3SZsf &)ȕޙVvYPa ht>km𥖒{Y=ތStnQiQl /$1; IuLn9(/]M,}{GuHLjӣUhnƒS`3tkƛ/]ZO2 -mo88&$5N7!Bvk 7&y;ֺjنR$Aw;S%bNV}+L2g YLdcE84hhNˆD؈&8qHKKXdX#5Y?g]0}i|y:.RZLlY|ɯ͒F`尐˯;kJh VZ:ѠR(}VuX qb. xSmّ c&07p̤q`&CRHRƮ8,q;*AlQi8Ϸ#!AdVnKcks6d>J)D ") f^*#( j#c8( 4rp("l*4\Lc|&>0*Eeldk)0?u+.{rny/S`>Bk#ߓyz#a3x1Wy_AW͓%1t|M;q0O*w&+>/^^PE2o>Ht21C8x XbtGg}Tb4ʣ4tHF c|h#;cJ( B߲*8ت5ߕkuM!:& Gf⫝4~@^o"UKy/40v̟Fq\w<צq,Yo`4^̬7Y$Er A}=%f0D7Jj5x8yڻ;{z%pŚ/q`n/.H{?p}-2|J{Oi4M[~mo ʡ-j6 (ľ6 6>Q$ؙt9'*c`bL j4!%ܑ͆p(fQCwQ]HoZ)B#SjA4gqR ]*+".j@SEMrthA:'PSʠ.BLp+(P+h]oSAME2U Wv)E(Z6.is/E-\ŝq]"Y==Zv悱NUgY+xS&{9'}P\dZ.E)dy[8!QfjDNϪ ׫BxaWna:3ջDgUe SZfYQR  >;;S;1JkԎ,n˚UΓiG6_ӎ,J02.+3+,J0Ӭc>|%f9TI(m!o[rYq"WuBJ9gm+Ymn9um}C, 8[!* j NUݯ'ݺb<|UW[}k\mʭ P\]JU2h(Ԓ "۔A;w{OA9!}"EG8$SQQYd;Z: #9(w h<#^ܗ~oNQwNTn]) 'DΦМptuLXWvs\ARN^G5svtKmwrK =BSfPi9iaFْ!z! fN~7@9|%*vxu} xGâW}t5vl wƻO_,x| c!I$҂i_V\2O j4=bmSq#2Gpr7cjWfNέ31R*r i ^*?B'Yv(Б3Ml.ߔJᔋr4ŕcĉC1`h MǶS\f 5(K~+st᤺9wiQo$tf~׺ϿW+~;M9h I}kƜԂUgl;P V&0F2=3IݨF^Y07oM%Q#e܌1C­()͛Dcy~}wVAysh]P[6?[wh,)lDeT^"y+aFwğMߒe6S&$Y3@Ѓ €SbN݉W2'<>=V(%>ɣBb*5|'m/!fYk\ 0kbׅjOas!{w!Zl;w^Z)!K=92v5%{'s1rFF p}%#ɥ0e >쐟= Aeo_kh(9gg䐲ۥYd{2!t /µk:L#.i_kjE:due 'HGqY'5Ck\?]nb2*0KũBc1qL2?5-cees.Ssm]^ŵ:1'Up`A`/Ǖqu`}KB*F8A<\U5^ɋ=e~,fu$ǑTǕa g_tS•$AB`7t]Ђ.kSad~0v04Û] ϽSA^@@~Z`_O|]~?t~'h3%PTP`AJE1(tQp_=) 77l D[rY^xN/i1b0& pgӷAw%I%O6Fɓ7J|>pN}ꦍraQ]1P]~XF((n /QLԕ#r+͟J''d`Ly$HqaA&4JC)tdu:aIdoQ5xxmɅ)2lۨ8Jk4=#c!A%30%(fC&,;LcN ^7zI Fs`C~7zӟ^`O6nZ{?`'U[ock3Mt܂OH ~J`#ojdqwE`%fmH] ӰY̝/IC9$~[dx]i)m^:kƇ#EFrP#- >SU`cp%C躑z'*c`bL jv 9 }J5=}O8*FՒFBimJ ǡBTKCtQ{pR!})B=ӡӃz1߇,p9,?(liUڋiz~(\/ҴBPC!^?'֟2nfӏJm J"]=JùXNY_tޒMʼnX孋L:^Η,<rC ^z6c]^_5/S8ujTrk!Fck%saLגaĩ<UDҭ+e! " Gf ȏ5y{o9@)OVh?g]9+,Vxte‾a7Mҿ;YykQaH *sYDwen{ol0իA' V1/s >V`,k^ [5A3JU:Q:+]? G[^3*XY0~&mhf#r3Ny,)m-Eg0't81<.m ]OZgs-|w mqi{wPso^)Zy), +wI+/U &Rk^!&H4{W{Y ]2Dy}I\ɷ[23k[;h>FE..v8͞+$Z5oa;~!WTJZ8Iu"eŐRE|3"HA_kO/~cNGv`YmqI9ٲ[8Iu@7!E1M&`?E ZH~F5{ljFlCN_;qzcGb9 !ri!]5u ٷ'/ۙ Iݝ I:7Tf Dc-QꦰD6}BP~[6jڲtk~G;\q"W FB'} W%W6R6u5[חp5*ں^Q}Z,U-:S}z6oqĩb52 !6 $\t1$O - 4+)FfjtGjrp. 
G%YZ\46W+$"HYE JJCZaI׊ &Zq"Wv\)dy]Hzz4ku 0Mn(h903D&pZEdz@vV/aՊQӛ5'Tu'?7Zϙ_k!"/RC3Cc!\K+_kxfeĕ+]Kj'N婏h'o$#Lӗ |>z|yZk^ )4MeiAͲujmSm#Ty'6ĤiMkJ "}A+&HW҆!D,rBA# 8Џ$HGn\ƕvRVM3tbr~rN^mRb9R?kw !WwP$?}KAGu 81Yٿډ)򘋘wS'Tcj+K(n<=g{`2Iw\ Jtan{#UɹaV BbT5oLΑw\- :zPmaGqtjHAZ>4 C$ѡXtYB \?aʪ:mOB\KR'?K2;b,dOmΠɗ W?5&&SzY`[5yqRbȟ5CJB ?W.O,C<ȁRpRCZU)bӫsjkyqI2'D::8S{ks{ZO/)h3up*#x{սMȇbDn*_k4*m5k<&iKxyZ)I /5Tջ?C_1ćMnLnLnLn5l4mȍ+,~ CUpef.;[* E8[_ϐ"o 1/Xp@7 )J,OF2OjcS'N)K vn:%On E&l2+m|_E|K?hS4o[C*dSC"T.#2õv)KHeh4S9&Tz%YET0FRgNJ \p P"ulkYbzdjsvyq 4*Mj>8b =fgDTGcIX(3C CSpi {s9q83@~r̭yS E*N4~+|EFr#,݇`uSgKȡ1M9XcLz:5:1HyM=cI1"c=`RK0J W;]Z &>/dϧ&ӢSӿOseW H|w D?|5yup=Ίoz3G?m< ac" ~|:/En3@S~3&/OO0>'pm Ӄ/w?`ƚb{>K_aJHDc[^x%z{iTfi4SI`˳e_k v`g[30Eb28o zV/ {/S?̟_w`4 ۛL?Vyq`n# |?y}YnlL$ow S֡5C^VcO*'!;&uA0(ʼn'Le68K,W2N8XjKb?#Hb)aW{ ()ɻJKM<-VbXjiKF(7~UΕAL -B_6N ca^B+"D3+ ζ΁$IFo9҃Tk1.x[[ej1 ].<)FlEn.5U_Džg0~|ېBC`=V ړ(%#.U!ٖUD-$C۲JH #T <S[( QDw6;yq{yp@Mi"0=ɔc 쬐G&; >BLq /k28,cϸ@6cE/e،)!o l<0XHrK96Ri M (FD), Hq#2n0ޠZ޲e>nCIO|/?B 4T)UbJO79,bO4T`%w%tR _yvU!Y~:$3"E!SGu"́kխHD VFkOܘ$i"5MfI_Wn%jI~$bϝ$Xy#`B2.FQ|ab1ꮘgӇ,{R %L{P&`+IrYHe)-)uV>YA{gW'#.xɔW"'s wStL $x3~!*W)e-MRi9QdY &ٌ>$G2bT QO3H9-n <;(zSE18,CeQ,muVxmȔfk%s?Z;rcpx3AKn_qyD$$ˊ rM7Rfev-]*|WQUkUx"ŸV)J`BTR#[W k9]!c )wM @,kW!~=YqP.*ڛLPIB=Lco|Jd,MUV-I66ϜrxǩCyYC )h)*dź$lΓ WBL7@) KQMF深[8sgS[ܐ,_M:lVO:#- 'l* ]cuUCABd!y/DWe ,zx!:],3UQمh8r!:ǘʂRA/D7M2TBtww"Jh5倠=A\R1xf2sX;=MquʕJ)"AgG_qʈZm]6; E\JzD(92Nk-" [b-vOq#&Vu:MDK+d0!Ch +pK9>Ҍ:]LU\#"_ G!A.,V=w>rK>te:wҸLν tjY{:wB:w4\TQ2PH.9gsΊHb@:7K!}9dB}`Hq5sJhS̓e%Z3x0@7\!߰/;gJ2G<*]=%D5S07a\b+ܩF&Xi_]3Uo>A7~ q{ 빗kѰFwx@Rغn{ѹX.Z}}%[ vZ3J^;\G9o1麅sAb^(^;0͓<RwRbp nJm{9-wk,F_}mZyӖG^AymQ׊G]$'Ts>}~Ux _qV=" VH 3e{p7fcfI\A'DYؑKt"r'~w. ^jAZzF5H&@FQ0/YHtMSs&)7h ԱK_u3+_^x"\QYfj( O'Fs*3Twz$NՄS;y~BE*nR8^Vh$꣫QJ9/ h_}RD2r5\pE^)"Ôbrɩ2B/F2_MP;{/Kw/WDFQFs}XU Pd2Kχ!Qײ.T&Z|yp)XꁰwFӱrN3@JW jo{P B¹޿TcӯHD``Wݢ|e\v!UAuEEJ Ll|VQ|qxY˥ B4X0UWF?D#Ѣ!O7#P^?)S?6[Bz`uƄ5 5O(p \ m??NR)59Zb&LW&P}j]r(ٶ@ΉVȾ$y{%!AJr!9\#*g5]'Em,\c%ARq щqJBRԐ$!*rPE؏hqp fQ\#0K8>\jSě\Oy:]c͐BM}/kTh=8Rԯ(6\Q% M5~md#5~&MHzu(XkԽFg]*/@H`Z ŅR_XZyvl4=EvlUʬ aSvUF O\ eRd3 +=7P X4S;lKLÂ:U=,Kl}"Nh^FfS t! BY?`LF"4U)R@#Ark)i@0xkoq4Vx73D#㬏sU:bkK}~~OMv, 4 @neaū # ,>;Zr|;ҀYo|yDDpR.?Z;͗R>Igṃ{F>3Y -O)g#,D3 _>W}!$1 48f'Iݏj4ǕhLl Sp(ueƢ"ca:.Z2_pr> x͸bZ})S}e8D >Qt6B׳/@}3g3‡ ?cf7˛\#nvp>Λ2|7h/FU){e.^+G@@m_+s? $pUa(IM*e=;8 6TNOGqP>.NGރ>B.q KoDfo7k^,XA8#`!ԥ;K%КyG1kzAeָg1^gQ:aeG[^qΊӍs*INhtƳJ3$ݺ$QS+J%ZCD#­m*sI┊RޑHwB !ҳU- ŦIܜMA l]|;t(*p/%#N_a]GѭKu?ĹBfJ,eLSєjڞ"Ж7;XK[(匫Z/j8qq4QpT<85 @yaCT$T6 ɩ`H:K~lj4FK QNMM$QM_[IzAT=`ND "H'y|xs:rɛm5qQ*q'zK R]n%,,EGe_Q]DPŵ 4J*8aLw{?H]erDWoY}]ks+,~I%~nՖRu֛/N恑S$/_`HIcH`0/ɣu+i8hh4?VJ'C4ӓTYGSZY¿WT@PVa[Àΐ灷>(= Vك6:j jVw?AN0s#YT}}*(o'%t'yƩɑar7)9sE牠w sN tN:~)~;50/5GEr ~‹9J$ݪj{c|vr̗MGvKF%l&]R6Nejvm`[i;E,IXB~GZ/Q,UzG[K1_F>DRDUѷ!Ճ0 @:46о}Dde"{26\pR))G<& B֨DQ*@d*q$3ʼn<8" CH82% &," sF@Eb18%6ld=}^ Xu󑄉FYi#H1.2 Y}4NPpyn4c4!TxFxvpw )h%@m+|կmP/ue62 `y"JꭓܮZQQĹg %c|ʆY΅wcA`6;g硰uUBKRF_urT W"b,Ci)4ĩ)RId,4 s4_ /Z,ݫ:mj`j|nVq֛l5Xmjd}?VeN¹=$_?x_=JlbNAu~;O5ܯ^y7*8Du\<&Hr! D1L $VPa2A (@$YEBUk"!'.P0Xoe"C! %jdZU, ?vx 1#Ḁ߯L1M#=.`2zؑz0t)O,Bg\ZCOԸ}DTA. 
Y\Ƀc&z^ױ.ed$ `"b+&ZM=-F?c :s X192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088460 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088572 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088656 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.089018 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.089086 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.089616 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2211cd7aa1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.581945505 +0000 UTC m=+4.325893846,LastTimestamp:2026-01-22 11:47:49.581945505 +0000 UTC m=+4.325893846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.093973 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2211dc7c94 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.582929044 +0000 UTC m=+4.326877385,LastTimestamp:2026-01-22 11:47:49.582929044 +0000 UTC m=+4.326877385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.099071 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b222113b4cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.838206159 +0000 UTC m=+4.582154510,LastTimestamp:2026-01-22 11:47:49.838206159 +0000 UTC m=+4.582154510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.102562 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2221f9391b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.853247771 +0000 UTC m=+4.597196122,LastTimestamp:2026-01-22 11:47:49.853247771 +0000 UTC m=+4.597196122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.108404 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-controller-manager-crc.188d0b2425302a1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 22 11:48:04 crc 
kubenswrapper[5120]: body: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.49711465 +0000 UTC m=+13.241063001,LastTimestamp:2026-01-22 11:47:58.49711465 +0000 UTC m=+13.241063001,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.112215 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b242531b5e4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.497215972 +0000 UTC m=+13.241164333,LastTimestamp:2026-01-22 11:47:58.497215972 +0000 UTC m=+13.241164333,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.116596 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa49c98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 11:48:04 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:48:04 crc kubenswrapper[5120]: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,LastTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.120059 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa552e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,LastTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.124230 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b242aa49c98\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa49c98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 11:48:04 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:48:04 crc kubenswrapper[5120]: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,LastTimestamp:2026-01-22 11:47:58.598695578 +0000 UTC m=+13.342643949,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.129840 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b242aa552e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa552e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,LastTimestamp:2026-01-22 11:47:58.598760579 +0000 UTC m=+13.342708960,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.135011 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: 
&Event{ObjectMeta:{kube-apiserver-crc.188d0b2572763d10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer Jan 22 11:48:04 crc kubenswrapper[5120]: body: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088519952 +0000 UTC m=+18.832468303,LastTimestamp:2026-01-22 11:48:04.088519952 +0000 UTC m=+18.832468303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.141846 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b257277916c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088607084 +0000 UTC m=+18.832555435,LastTimestamp:2026-01-22 11:48:04.088607084 +0000 UTC m=+18.832555435,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.152566 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b257277a256 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer Jan 22 11:48:04 crc kubenswrapper[5120]: body: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088611414 +0000 UTC m=+18.832559755,LastTimestamp:2026-01-22 11:48:04.088611414 +0000 UTC m=+18.832559755,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.156729 5120 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b257278cb38 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088687416 +0000 UTC m=+18.832635757,LastTimestamp:2026-01-22 11:48:04.088687416 +0000 UTC m=+18.832635757,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.160774 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b25727e7a8f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 22 11:48:04 crc kubenswrapper[5120]: body: Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.089059983 +0000 UTC m=+18.833008324,LastTimestamp:2026-01-22 11:48:04.089059983 +0000 UTC m=+18.833008324,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 11:48:04 crc kubenswrapper[5120]: > Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.165630 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b25727f89bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.089129405 +0000 UTC m=+18.833077746,LastTimestamp:2026-01-22 11:48:04.089129405 +0000 UTC m=+18.833077746,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:04 crc 
kubenswrapper[5120]: I0122 11:48:04.493288 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.713594 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716079 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88" exitCode=255 Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716245 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88"} Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716635 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724310 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.725468 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.725893 5120 scope.go:117] "RemoveContainer" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88" Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.757254 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:04.727664564 +0000 UTC m=+19.471612905,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.229800 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2628526\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2628526 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.249535782 +0000 UTC m=+2.993484123,LastTimestamp:2026-01-22 11:48:05.224327746 +0000 UTC m=+19.968276087,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.242737 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2e176d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2e176d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,LastTimestamp:2026-01-22 11:48:05.236185438 +0000 UTC m=+19.980133779,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.492238 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.501929 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.502157 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503315 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503365 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503381 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.503733 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.506121 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.507154 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.615081 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.720999 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723481 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"} Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723626 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723760 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.724729 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.726266 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.080649 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.080923 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081742 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081753 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.082172 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.114150 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.494351 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.728622 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.730049 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732753 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" exitCode=255 Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"} Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732943 5120 scope.go:117] "RemoveContainer" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733035 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733544 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733556 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733514 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733510 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.734154 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735371 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.736282 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.737239 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.737785 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.738317 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.743677 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.792597 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793695 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793751 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793771 5120 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793810 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.802221 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.947015 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:48:07 crc kubenswrapper[5120]: I0122 11:48:07.494026 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:07 crc kubenswrapper[5120]: I0122 11:48:07.738278 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 11:48:08 crc kubenswrapper[5120]: E0122 11:48:08.120128 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:08 crc kubenswrapper[5120]: I0122 11:48:08.493472 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:08 crc kubenswrapper[5120]: E0122 11:48:08.710649 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:48:09 crc kubenswrapper[5120]: E0122 11:48:09.030525 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:09 crc kubenswrapper[5120]: I0122 11:48:09.494319 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:10 crc kubenswrapper[5120]: I0122 11:48:10.496548 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.497042 5120 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.726051 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.726333 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727271 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.727647 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727910 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.728115 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.736432 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:11.728084697 +0000 UTC m=+26.472033038,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:12 crc kubenswrapper[5120]: I0122 11:48:12.498429 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.202499 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203602 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203730 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203783 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:13 crc kubenswrapper[5120]: E0122 11:48:13.219823 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:13 crc kubenswrapper[5120]: E0122 11:48:13.289862 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.498027 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:14 crc kubenswrapper[5120]: E0122 11:48:14.435744 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:48:14 crc kubenswrapper[5120]: I0122 11:48:14.494540 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.127877 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.498383 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.615769 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.724321 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.724765 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:15 crc kubenswrapper[5120]: 
I0122 11:48:15.725693 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.725813 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.725880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.726375 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.726747 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.727028 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.735097 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:15.726996993 +0000 UTC m=+30.470945334,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:16 crc kubenswrapper[5120]: I0122 11:48:16.495156 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:16 crc kubenswrapper[5120]: E0122 11:48:16.618611 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:48:16 crc kubenswrapper[5120]: E0122 11:48:16.758536 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:17 crc kubenswrapper[5120]: I0122 11:48:17.495950 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:18 crc kubenswrapper[5120]: I0122 11:48:18.494788 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:19 crc kubenswrapper[5120]: I0122 11:48:19.493514 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.220982 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222622 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222697 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222736 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:20 crc kubenswrapper[5120]: E0122 11:48:20.238289 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.496798 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:21 crc kubenswrapper[5120]: I0122 11:48:21.499310 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:22 crc kubenswrapper[5120]: E0122 11:48:22.133638 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:22 crc kubenswrapper[5120]: I0122 11:48:22.494456 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:23 crc kubenswrapper[5120]: I0122 11:48:23.494056 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:24 crc kubenswrapper[5120]: I0122 11:48:24.495548 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:25 crc kubenswrapper[5120]: I0122 11:48:25.494322 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:25 crc kubenswrapper[5120]: E0122 11:48:25.616478 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.493830 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.571424 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572298 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.572617 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572882 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.578682 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:26.57380984 +0000 UTC m=+41.317758181,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.773903 5120 
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.794434 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.796724 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"}
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.797009 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.797428 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2e176d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2e176d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,LastTimestamp:2026-01-22 11:48:26.790226214 +0000 UTC m=+41.534174555,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798084 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798119 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798129 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.798444 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239008 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239879 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239892 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239912 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 11:48:27 crc kubenswrapper[5120]: E0122 11:48:27.247112 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.495693 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.495792 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.804755 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.805569 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809087 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" exitCode=255
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809200 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"}
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809287 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809655 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.810928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.811061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.811092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.812097 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.812672 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"
Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.813120 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.822414 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:28.813046112 +0000 UTC m=+43.556994483,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:29 crc kubenswrapper[5120]: E0122 11:48:29.143146 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 11:48:29 crc kubenswrapper[5120]: E0122 11:48:29.235921 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 11:48:29 crc kubenswrapper[5120]: I0122 11:48:29.498214 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:29 crc kubenswrapper[5120]: I0122 11:48:29.814369 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 11:48:30 crc kubenswrapper[5120]: I0122 11:48:30.498656 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.495705 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.726148 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.726609 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728760 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728820 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.729227 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.729501 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"
Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.729689 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.736102 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:31.729660931 +0000 UTC m=+46.473609272,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:32 crc kubenswrapper[5120]: E0122 11:48:32.054502 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
\"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:48:32 crc kubenswrapper[5120]: I0122 11:48:32.495562 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:33 crc kubenswrapper[5120]: E0122 11:48:33.457727 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:33 crc kubenswrapper[5120]: I0122 11:48:33.490984 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.247672 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251201 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251457 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251499 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:34 crc kubenswrapper[5120]: E0122 11:48:34.268174 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.495244 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:35 crc kubenswrapper[5120]: I0122 11:48:35.494036 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:35 crc kubenswrapper[5120]: E0122 11:48:35.616942 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.147711 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.494819 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.797639 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.798065 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.800476 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.800920 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.801390 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.808843 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:36.801323832 +0000 UTC m=+51.545272213,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:37 crc kubenswrapper[5120]: I0122 11:48:37.493618 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:38 crc kubenswrapper[5120]: I0122 11:48:38.495546 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
Jan 22 11:48:39 crc kubenswrapper[5120]: I0122 11:48:39.494564 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.495807 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.744086 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.744681 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.745792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.745934 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.746104 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:40 crc kubenswrapper[5120]: E0122 11:48:40.746492 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.268344 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269493 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269537 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 11:48:41 crc kubenswrapper[5120]: E0122 11:48:41.278648 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.493345 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:41 crc kubenswrapper[5120]: E0122 11:48:41.965935 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 11:48:42 crc kubenswrapper[5120]: I0122 11:48:42.494018 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:43 crc kubenswrapper[5120]: E0122 11:48:43.152097 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 11:48:43 crc kubenswrapper[5120]: I0122 11:48:43.495019 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:44 crc kubenswrapper[5120]: I0122 11:48:44.495085 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:45 crc kubenswrapper[5120]: I0122 11:48:45.496348 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:45 crc kubenswrapper[5120]: E0122 11:48:45.617975 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 11:48:46 crc kubenswrapper[5120]: I0122 11:48:46.493897 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:47 crc kubenswrapper[5120]: I0122 11:48:47.494307 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.279611 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281318 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281349 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281408 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 11:48:48 crc kubenswrapper[5120]: E0122 11:48:48.298630 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
the cluster scope" node="crc" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.496555 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.494064 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.571706 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572262 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572479 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572512 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.572763 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.573523 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573788 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.580615 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 
11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:49.575127869 +0000 UTC m=+64.319076210,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.870237 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.872175 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f"} Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.872420 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873096 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873184 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.873852 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:50 crc kubenswrapper[5120]: E0122 11:48:50.158438 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.495010 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.662058 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dp5b7" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.667433 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dp5b7" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.708136 5120 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.401837 5120 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.669012 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-21 11:43:50 +0000 UTC" deadline="2026-02-14 08:50:26.033044572 +0000 UTC" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.669101 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" 
sleep="549h1m34.363948101s" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.881529 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.882008 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883423 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" exitCode=255 Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f"} Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883525 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883707 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884287 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:52 crc kubenswrapper[5120]: E0122 11:48:52.884723 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.885080 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:48:52 crc kubenswrapper[5120]: E0122 11:48:52.885323 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:53 crc kubenswrapper[5120]: I0122 11:48:53.888191 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.299706 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300698 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300793 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 
11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300992 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.310927 5120 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.311287 5120 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.311317 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314695 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.332385 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340345 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340454 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359123 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359142 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359155 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382682 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382719 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.396359 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.396388 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.497206 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.597826 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.619472 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.698265 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.798714 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.899467 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.000249 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.100387 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.200697 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.301454 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.401921 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.502265 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.603158 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.703863 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.804813 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.905991 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.006288 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.107486 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.208595 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc 
kubenswrapper[5120]: E0122 11:48:57.308848 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.409994 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.510750 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.610855 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.711473 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.811551 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.912020 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.012729 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.113119 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.214147 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.314694 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.415156 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.516225 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.616643 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.716801 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.817462 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.918282 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.018731 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.119072 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.219986 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.320514 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.420837 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.520976 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.621775 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.722228 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.822786 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.873091 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.873376 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.874782 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.875082 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.875386 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.923935 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.024417 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.124580 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.224719 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.325818 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
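The pod_workers.go:1301 entry above shows the kube-apiserver-check-endpoints container in CrashLoopBackOff with a 40s back-off. The kubelet's restart back-off roughly doubles on each crash from a 10s base up to a 5-minute cap (values assumed from upstream defaults, not visible in this log); the 40s in the message would be the third step:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed kubelet defaults: restart back-off starts at 10s, doubles
    	// per crash, and is capped at 5m; the log's "back-off 40s" is step 3.
    	const (
    		base     = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	delay := base
    	for crash := 1; crash <= 7; crash++ {
    		fmt.Printf("crash %d: back-off %s before restarting container\n", crash, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }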
lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.527732 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.627829 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.727995 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.829038 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.930008 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.030916 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.131215 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.231368 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.332161 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.432592 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.485608 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.508889 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.521732 5120 apiserver.go:52] "Watching apiserver" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.524067 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.528083 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.528696 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wrdkl","openshift-multus/multus-4lzht","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-machine-config-operator/machine-config-daemon-dq269","openshift-multus/multus-additional-cni-plugins-rg989","openshift-image-registry/node-ca-tf9nb","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-2mf7v","openshift-multus/network-metrics-daemon-ldwx4","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.530166 5120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.532397 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.532478 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.532591 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534606 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534669 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.535325 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.536046 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.536488 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.536564 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.537611 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.538874 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.543176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.544060 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.544764 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.545913 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.546252 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.546438 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.547778 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.549126 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.554018 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.570419 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.585427 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.597538 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.608805 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.621463 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.623629 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.626363 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.629380 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.630391 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.632769 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633589 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633557 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
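Every "Failed to update status for pod" entry in this stretch fails the same way: the apiserver must consult the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 before admitting the status patch, but the webhook pod is itself among the pods still being recreated, so the call is refused until it comes back. A quick reachability probe for that endpoint (illustrative, independent of the kubelet):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Endpoint taken from the failing webhook URL in the log.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    	if err != nil {
    		fmt.Println("webhook not reachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("webhook endpoint is accepting connections")
    }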
Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640364 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640564 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640671 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640901 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.647482 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.650361 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.652316 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653527 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653752 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653797 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653861 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653835 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654014 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654045 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654431 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654457 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655513 5120 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655583 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.655667 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655847 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658040 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658099 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658146 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.659686 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.660255 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.663796 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667300 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667359 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667540 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.679258 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686044 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686086 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686117 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686171 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686195 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686214 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686233 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686292 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686310 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686371 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686412 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686485 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687176 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687298 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.187270179 +0000 UTC m=+76.931218530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687460 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687544 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
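The nestedpendingoperations.go:348 entries record the volume manager's per-volume retry back-off: after a failed MountVolume.SetUp it refuses retries for durationBeforeRetry (500ms here), and that delay grows exponentially on repeated failures up to a cap. Only the first 500ms step is visible in this excerpt; the doubling factor and the cap below are assumed from upstream defaults:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed defaults: initial 500ms delay, doubling per failure,
    	// capped at roughly two minutes; only 500ms appears in the log.
    	const (
    		initial = 500 * time.Millisecond
    		maxWait = 2*time.Minute + 2*time.Second
    	)
    	d := initial
    	for attempt := 1; attempt <= 9; attempt++ {
    		fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, d)
    		d *= 2
    		if d > maxWait {
    			d = maxWait
    		}
    	}
    }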
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687544 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.187528025 +0000 UTC m=+76.931476556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.687816 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.688281 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.689068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.690741 5120 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.693519 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 
11:49:01.706222 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706321 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706348 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706514 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.20647829 +0000 UTC m=+76.950426671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707396 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707450 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707473 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707562 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.207525295 +0000 UTC m=+76.951473676 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.708020 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.708392 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.709664 5120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.712228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.716798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.717521 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.721006 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.723197 5120 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.723295 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.725114 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.726137 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.738099 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741172 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741202 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.758837 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.775186 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.785638 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.786871 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787104 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787167 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787198 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787232 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787267 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787283 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787298 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787397 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787430 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787531 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787548 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787579 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787596 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787636 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787691 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787708 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787726 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.787742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787760 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787774 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787792 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787921 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787937 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787976 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788000 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788017 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788054 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788070 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788085 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788115 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788131 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788150 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788170 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788186 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788203 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788252 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788267 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788282 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788297 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788317 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788333 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788349 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788366 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788384 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788404 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788422 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788441 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788459 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788483 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788500 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788515 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788531 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788548 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788565 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788580 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788597 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788632 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788666 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788683 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788700 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788718 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788736 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788770 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788787 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788807 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.788824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788857 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788877 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788893 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788909 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788926 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788942 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788994 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789012 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789031 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789068 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789724 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790082 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790355 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790400 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790436 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790480 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790522 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790555 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790646 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793702 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793761 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.793807 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793848 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793924 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793981 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794024 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794078 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794603 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794677 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794716 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.794753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794789 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794974 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795013 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795083 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795253 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788399 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795372 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788666 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789067 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789301 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789700 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789978 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790003 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790117 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790102 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790127 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790755 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791112 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791126 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791436 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791814 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791820 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791878 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791839 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792183 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792297 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792654 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792675 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793255 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793441 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793461 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793767 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794099 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794168 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794219 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794464 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794479 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795629 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794762 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795079 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796075 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796575 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796846 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797090 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797156 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797145 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797453 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797605 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797775 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797824 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796900 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798823 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799111 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800856 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800911 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801025 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801181 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801221 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801260 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801495 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801717 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801929 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802314 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802526 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802714 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802880 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803092 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803156 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803418 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803550 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803589 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803671 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803600 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). 
InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803656 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804038 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804130 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804161 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804233 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804819 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805095 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805599 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805935 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806100 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806147 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806657 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806903 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806951 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807024 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807053 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807082 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807111 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807144 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807173 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807201 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807227 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807310 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807341 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807372 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807429 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.807455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807480 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807583 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807607 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807629 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807672 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807717 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807741 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.807763 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807784 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807808 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807863 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807890 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807914 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807937 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807992 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808020 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.808045 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808076 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808124 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808155 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808188 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808217 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808241 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808265 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808291 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.808320 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808369 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808394 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808418 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808443 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808550 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808605 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808632 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808657 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808710 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808772 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808797 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808855 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808880 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808905 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808978 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809007 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806908 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807233 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809102 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807851 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808221 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808812 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809246 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809140 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809574 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809604 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809746 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809785 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809919 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809947 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810134 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810478 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810505 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810533 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810828 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811115 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811176 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811239 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811290 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811348 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811374 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811535 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809052 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811774 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811808 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811823 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811847 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811869 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811869 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812006 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812107 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812147 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812184 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812278 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812351 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812397 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812470 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812502 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813362 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813655 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813680 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813763 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813783 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813800 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813980 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814003 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814067 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814090 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" 
(UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815767 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812239 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812440 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812664 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812726 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812859 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813180 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814693 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814851 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814735 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814996 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815042 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815434 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815558 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815576 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816622 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816768 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816860 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815884 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816229 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816242 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816427 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.817157 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818015 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818246 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818506 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818707 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818892 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819017 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819499 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819654 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819699 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819734 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819764 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod 
\"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819906 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820033 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820142 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820305 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820341 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820361 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820886 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821024 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821638 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821754 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821762 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.822198 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.322140653 +0000 UTC m=+77.066089024 (durationBeforeRetry 500ms). 
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822418 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822467 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822501 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822582 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822649 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822684 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822715 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822753 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822791 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822821 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822857 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822886 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822923 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822976 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823012 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823046 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823075 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823105 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823159 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823192 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823359 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823396 5120 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823416 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823436 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823457 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823476 5120 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823495 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823515 5120 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823555 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822651 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822882 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823115 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823216 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823325 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823610 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823705 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823622 5120 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823812 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823887 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823927 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823949 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823995 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824008 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824047 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824110 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824133 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824089 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824220 5120 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824238 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" 
DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824256 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824271 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824302 5120 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824321 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824338 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824352 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824366 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824379 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824394 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824398 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824410 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824438 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824450 5120 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824451 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824463 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824476 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824490 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824506 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824518 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824531 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824488 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824543 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824556 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824568 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824579 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824591 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824603 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824605 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824617 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824681 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824761 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824782 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824800 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824818 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824836 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824854 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825025 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825124 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825196 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825221 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825238 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825251 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825359 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825383 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825403 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825425 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825443 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825462 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825481 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825500 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825518 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath 
\"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825537 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825556 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825574 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825574 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825592 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825615 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825633 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825651 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825669 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825688 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825708 5120 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825727 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825759 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825775 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825798 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825816 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825834 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825851 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825869 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825885 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825902 5120 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825920 5120 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825942 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825995 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826015 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826032 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826050 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826068 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826088 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826105 5120 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826122 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826140 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826157 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826213 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826233 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826252 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826269 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826307 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826325 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826343 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826359 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826375 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826392 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826410 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826429 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826453 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826470 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826488 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826505 5120 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826521 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826538 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826555 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826571 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826587 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826605 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826623 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826641 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826663 5120 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826680 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826698 5120 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826719 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826736 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
\"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826753 5120 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826770 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826787 5120 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826805 5120 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826822 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826839 5120 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826858 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826875 5120 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826892 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826909 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826925 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826943 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826982 5120 
reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827000 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827018 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827035 5120 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827052 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827070 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827088 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827106 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827124 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827141 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827159 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827178 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827195 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827212 5120 reconciler_common.go:299] 
"Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827229 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827246 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827265 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827282 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827301 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827320 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827337 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827354 5120 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827371 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827387 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827402 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827418 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827435 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827452 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827469 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827486 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827504 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827521 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827539 5120 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827557 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828010 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828027 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828469 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.830894 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834388 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834445 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834546 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.835001 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.835918 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836157 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836322 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836377 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836536 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837106 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837183 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837296 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837556 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837903 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840232 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840232 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840583 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840685 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841123 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841142 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.846641 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.846852 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847050 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847193 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847346 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\
"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852437 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852515 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852548 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852576 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.859842 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.861046 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.862083 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.865714 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.867717 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.879114 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.882005 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.886038 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.887650 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.890266 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.890744 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:01 crc kubenswrapper[5120]: else Jan 22 11:49:01 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:01 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:01 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.891894 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.898171 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56 WatchSource:0}: Error finding container 89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56: Status 404 returned error can't find the container with id 89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56 Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.898578 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 22 11:49:01 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 22 11:49:01 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 22 11:49:01 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 22 11:49:01 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-port=9743 \ Jan 22 11:49:01 crc kubenswrapper[5120]: ${ho_enable} \ Jan 22 11:49:01 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-approver \ Jan 22 11:49:01 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Jan 22 11:49:01 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.901459 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.902183 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-webhook \ Jan 22 11:49:01 crc kubenswrapper[5120]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.903365 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.903759 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.905061 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.908253 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.909521 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"2e04559dec16ab7018539cbee7830f09441da9e974cd81d09aceb8b51db915ee"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.910230 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.910838 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"0dd60b61ffd0d4d6a32efceb6f2e8ab66bb020d554438a82321a2b3ac810a3e0"} Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.911225 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 22 11:49:01 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 22 11:49:01 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 22 11:49:01 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 22 11:49:01 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-port=9743 \ Jan 22 11:49:01 crc kubenswrapper[5120]: ${ho_enable} \ Jan 22 11:49:01 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-approver \ Jan 22 11:49:01 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Jan 22 11:49:01 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.911660 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.911797 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912200 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912457 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:01 crc 
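The burst of "CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" failures in this window (the webhook, iptables-alerter, and network-operator containers) is the kubelet refusing to build a container config before its Service informer has synced: the legacy *_SERVICE_HOST/*_SERVICE_PORT environment variables require at least one complete read of the Services visible to the pod, so during early node startup every start attempt fails and is requeued. A quick way to see which containers are held on this reason, assuming oc access to the cluster and using a pod name taken from the log:

    # Print each container's waiting reason for an affected pod
    oc get pod iptables-alerter-5jnd7 -n openshift-network-operator \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state.waiting.reason}{"\n"}{end}'

These errors clear on their own once the kubelet completes a successful List of Services; no spec change is needed.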
kubenswrapper[5120]: else Jan 22 11:49:01 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:01 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912870 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.913518 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.914632 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc 
kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-webhook \ Jan 22 11:49:01 crc kubenswrapper[5120]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.916280 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.920521 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.923118 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929127 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929222 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929260 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929383 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929391 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929428 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929452 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929561 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929673 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929686 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929714 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: 
\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929750 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929766 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929785 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929787 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929885 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929923 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929947 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930027 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" (UID: 
\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930055 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930078 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930116 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930175 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930203 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.930265 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930279 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930311 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.930330 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.430311644 +0000 UTC m=+77.174259985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930432 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930432 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930527 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930570 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: 
I0122 11:49:01.930586 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930660 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930665 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930679 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930748 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930770 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930807 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930831 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930847 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930857 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930897 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930916 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930939 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: 
\"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930993 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931012 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931014 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931058 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931080 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931112 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931128 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931144 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod 
\"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931166 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931180 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931363 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931687 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931185 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931693 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931732 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod 
\"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931770 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931810 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931856 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931825 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931949 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931979 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932167 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930785 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.933239 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.933755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934042 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934165 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934178 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934190 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934268 
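The check-endpoints container status above carries a lastState of exitCode 137 with reason ContainerStatusUnknown: the previous container instance disappeared (here, across the node restart) before its real exit status could be collected, and 137 is the conventional 128 + signal encoding for SIGKILL. Decoding such codes in bash:

    # Exit codes above 128 encode a fatal signal: code - 128 = signal number
    kill -l $((137 - 128))   # prints KILL

With restartCount 4 the container is also on the kubelet's crash-loop schedule seen earlier for kube-apiserver-check-endpoints: the back-off doubles per restart from 10s toward a 5m cap, so "back-off 40s" is the third step.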
5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934338 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934359 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934380 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934394 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934407 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934421 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934436 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934452 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934465 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934477 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934489 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934502 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934514 5120 
reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934529 5120 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934542 5120 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934558 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934571 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934584 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934595 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934605 5120 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934615 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934627 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934641 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934655 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934667 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934680 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934693 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934706 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934719 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934730 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934738 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934748 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934759 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934768 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934776 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934786 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934796 5120 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934805 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934815 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: 
\"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934824 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934832 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934841 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934850 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934860 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934868 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934878 5120 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934889 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934898 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935285 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935349 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935488 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" 
(UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935906 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.936242 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.937943 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.944021 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.948837 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.949175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.950483 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953340 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953942 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.954034 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.954184 5120 util.go:30] "No sandbox for pod can be found. 
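The same refusal repeats for pod after pod, so a per-pod tally distinguishes a shared webhook outage from a pod-specific problem. A hedged one-liner, assuming the kubelet logs to journald as in the records above:

  # Identical counts across pods point at the shared webhook, not at
  # any one pod's status payload.
  journalctl -u kubelet --since "10 min ago" \
    | grep -o 'Failed to update status for pod" pod="[^"]*"' \
    | sort | uniq -c | sort -rn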
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957358 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.957605 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa5719f_fed8_44ac_a759_d2c22d9a2a7f.slice/crio-cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5 WatchSource:0}: Error finding container cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5: Status 404 returned error can't find the container with id cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5 Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957716 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957803 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957814 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.964304 5120 util.go:30] "No sandbox for pod can be found. 
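The setters.go record above flips the node to NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet; that configuration is written by the ovn-kubernetes and multus pods that are themselves still being mounted and started in this same log. A quick check from the node, a sketch assuming crictl is available:

  # Empty until ovnkube-node and multus have written their CNI config.
  ls -l /etc/kubernetes/cni/net.d/

  # Watch the pods that produce the config; the Ready condition should
  # flip back once they are running.
  crictl pods --label io.kubernetes.pod.namespace=openshift-ovn-kubernetes
  crictl pods --label io.kubernetes.pod.namespace=openshift-multus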
Need to start a new one" pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.964659 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.965805 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -uo pipefail Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 22 11:49:01 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Jan 22 11:49:01 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 
Jan 22 11:49:01 crc kubenswrapper[5120]: 	IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]: 	# Make a temporary file with the old hosts file's attributes.
Jan 22 11:49:01 crc kubenswrapper[5120]: 	if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  echo "Failed to preserve hosts file. Exiting."
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  exit 1
Jan 22 11:49:01 crc kubenswrapper[5120]: 	fi
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]: 	while true; do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  declare -A svc_ips
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  for svc in "${services[@]}"; do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # Fetch service IP from cluster dns if present. We make several tries
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # support UDP loadbalancers and require reaching DNS through TCP.
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    for i in ${!cmds[*]}
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      ips=($(eval "${cmds[i]}"))
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	        svc_ips["${svc}"]="${ips[@]}"
Jan 22 11:49:01 crc kubenswrapper[5120]: 	        break
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      fi
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  done
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  # Update /etc/hosts only if we get valid service IPs
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  # Stale entries could exist in /etc/hosts if the service is deleted
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  if [[ -n "${svc_ips[*]-}" ]]; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      # Only continue rebuilding the hosts entries if its original content is preserved
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      sleep 60 & wait
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      continue
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    fi
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # Append resolver entries for services
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    rc=0
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    for svc in "${!svc_ips[@]}"; do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      for ip in ${svc_ips[${svc}]}; do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	        echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    if [[ $rc -ne 0 ]]; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      sleep 60 & wait
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      continue
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    fi
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]:
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # Replace /etc/hosts with our modified version if needed
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 22 11:49:01 crc kubenswrapper[5120]: 	    # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  fi
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  sleep 60 & wait
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  unset svc_ips
Jan 22 11:49:01 crc kubenswrapper[5120]: 	done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:01 crc kubenswrapper[5120]: 	> logger="UnhandledError"
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.966945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.971594 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dq269"
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.973822 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.975583 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351"
Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.978023 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.978592 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.987332 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.988162 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f485fd_0793_40a0_abf8_12fd3b612c87.slice/crio-a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb WatchSource:0}: Error finding container a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb: Status 404 returned error can't find the container with id a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.992749 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.993596 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:01 crc kubenswrapper[5120]: 	container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Jan 22 11:49:01 crc kubenswrapper[5120]: 	while [ true ];
Jan 22 11:49:01 crc kubenswrapper[5120]: 	do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  for f in $(ls /tmp/serviceca); do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      echo $f
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      ca_file_path="/tmp/serviceca/${f}"
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      reg_dir_path="/etc/docker/certs.d/${f}"
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      if [ -e "${reg_dir_path}" ]; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          cp -u $ca_file_path $reg_dir_path/ca.crt
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      else
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          mkdir $reg_dir_path
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          cp $ca_file_path $reg_dir_path/ca.crt
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      fi
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  for d in $(ls /etc/docker/certs.d); do
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      echo $d
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      reg_conf_path="/tmp/serviceca/${dp}"
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      if [ ! -e "${reg_conf_path}" ]; then
Jan 22 11:49:01 crc kubenswrapper[5120]: 	          rm -rf /etc/docker/certs.d/$d
Jan 22 11:49:01 crc kubenswrapper[5120]: 	      fi
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	  sleep 60 & wait ${!}
Jan 22 11:49:01 crc kubenswrapper[5120]: 	done
Jan 22 11:49:01 crc kubenswrapper[5120]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:01 crc kubenswrapper[5120]: 	> logger="UnhandledError"
Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.995384 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87"
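The node-ca loop above maps "registry:port" directory names to flat file names under /tmp/serviceca by encoding ':' as '..', and decodes on the way back; the two sed expressions are inverses of each other. A standalone round-trip demo of that encoding (the registry name below is a hypothetical example, not taken from the log):

  d="registry.example.test:5000"                     # hypothetical registry name
  flat=$(echo "$d" | sed -r 's/(.*):/\1\.\./')       # encode ':' -> '..'
  back=$(echo "$flat" | sed -r 's/(.*)\.\./\1:/')    # decode '..' -> ':'
  echo "$flat"; echo "$back"
  test "$back" = "$d" && echo "round trip OK"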
Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.001290 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90c9e0b1_9c25_48fc_8aef_c587b5d6d8e9.slice/crio-89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488 WatchSource:0}: Error finding container 89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488: Status 404 returned error can't find the container with id 89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.003785 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.004110 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.005008 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd62bdde_a6c1_42b3_9585_ba64c63cbb51.slice/crio-948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d WatchSource:0}: Error finding container 948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d: Status 404 returned error can't find the container with id 948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.008597 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.008807 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 22 11:49:02 crc kubenswrapper[5120]: apiVersion: v1 Jan 22 11:49:02 crc kubenswrapper[5120]: clusters: Jan 22 11:49:02 crc 
kubenswrapper[5120]: - cluster: Jan 22 11:49:02 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 22 11:49:02 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-cluster Jan 22 11:49:02 crc kubenswrapper[5120]: contexts: Jan 22 11:49:02 crc kubenswrapper[5120]: - context: Jan 22 11:49:02 crc kubenswrapper[5120]: cluster: default-cluster Jan 22 11:49:02 crc kubenswrapper[5120]: namespace: default Jan 22 11:49:02 crc kubenswrapper[5120]: user: default-auth Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-context Jan 22 11:49:02 crc kubenswrapper[5120]: current-context: default-context Jan 22 11:49:02 crc kubenswrapper[5120]: kind: Config Jan 22 11:49:02 crc kubenswrapper[5120]: preferences: {} Jan 22 11:49:02 crc kubenswrapper[5120]: users: Jan 22 11:49:02 crc kubenswrapper[5120]: - name: default-auth Jan 22 11:49:02 crc kubenswrapper[5120]: user: Jan 22 11:49:02 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:02 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:02 crc kubenswrapper[5120]: EOF Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.009169 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.010330 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.010332 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.016147 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67eb0b85_4fb2_4c18_a78b_e2eeaa4d2087.slice/crio-082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919 WatchSource:0}: Error finding container 082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919: Status 404 returned error can't find the container with id 082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919 Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.018982 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:02 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:02 crc kubenswrapper[5120]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.020840 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.021080 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.024005 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb50da0_eb06_4959_b8da_70919924f77e.slice/crio-20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72 WatchSource:0}: Error finding container 20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72: Status 404 returned error can't find the container with id 20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72 Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.026479 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:02 crc kubenswrapper[5120]: set -euo pipefail Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 22 11:49:02 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present. Jan 22 11:49:02 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 22 11:49:02 crc kubenswrapper[5120]: TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs(){ Jan 22 11:49:02 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: } Jan 22 11:49:02 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 5 Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Jan 22 11:49:02 crc kubenswrapper[5120]: --logtostderr \ Jan 22 11:49:02 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.029157 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:02 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:02 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Jan 22 11:49:02 crc kubenswrapper[5120]: # will rollout control plane pods as well Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 
crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Jan 22 11:49:02 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Jan 22 11:49:02 crc kubenswrapper[5120]: else Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:02 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 22 11:49:02 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-pprof \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-ip=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-qos=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-service=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multicast \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.030338 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.032259 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.040470 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.058213 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059386 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059466 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059476 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.098038 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.153320 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161777 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.184039 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238572 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238659 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238732 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239048 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239077 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239095 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239187 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239159542 +0000 UTC m=+77.983107893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239293 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239310 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239322 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239360 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239347817 +0000 UTC m=+77.983296178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239426 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239466 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239455549 +0000 UTC m=+77.983403900 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239534 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239569 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239559592 +0000 UTC m=+77.983507953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.243950 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"mem
ory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"us
er\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265795 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265902 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265933 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.266063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.266097 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.271381 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.306696 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.339291 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.339578 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.339756 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.339729097 +0000 UTC m=+78.083677448 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368332 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368424 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.379926 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.421764 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.441563 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.441764 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.441842 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.441821339 +0000 UTC m=+78.185769690 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.464541 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471400 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471466 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471517 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.503034 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized 
nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.540783 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574180 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574221 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574231 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.577869 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.620242 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.658738 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676689 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676771 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676783 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676806 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676821 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.704935 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.738725 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.777600 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778496 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778562 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778572 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.816371 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.860587 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880762 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880782 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880800 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880812 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.897259 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.914732 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tf9nb" event={"ID":"f9f485fd-0793-40a0-abf8-12fd3b612c87","Type":"ContainerStarted","Data":"a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.916684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"bfb2fa8324043129075f91f76cc2cd600947936a1c269fd1d116dfd187774826"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.916894 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container 
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 22 11:49:02 crc kubenswrapper[5120]: while [ true ]; Jan 22 11:49:02 crc kubenswrapper[5120]: do Jan 22 11:49:02 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Jan 22 11:49:02 crc kubenswrapper[5120]: echo $f Jan 22 11:49:02 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Jan 22 11:49:02 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 22 11:49:02 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 22 11:49:02 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:02 crc kubenswrapper[5120]: else Jan 22 11:49:02 crc kubenswrapper[5120]: mkdir $reg_dir_path Jan 22 11:49:02 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Jan 22 11:49:02 crc kubenswrapper[5120]: echo $d Jan 22 11:49:02 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 22 11:49:02 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Jan 22 11:49:02 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait ${!} Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.917724 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.917991 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.918647 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.918989 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.919298 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:02 crc kubenswrapper[5120]: set -euo pipefail Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 22 11:49:02 crc kubenswrapper[5120]: # As the secret mount is 
optional we must wait for the files to be present. Jan 22 11:49:02 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 22 11:49:02 crc kubenswrapper[5120]: TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs(){ Jan 22 11:49:02 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: } Jan 22 11:49:02 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 5 Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Jan 22 11:49:02 crc kubenswrapper[5120]: --logtostderr \ Jan 22 11:49:02 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.919869 5120 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.920094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.920947 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:02 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:02 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.921258 5120 
kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: set -o allexport
Jan 22 11:49:02 crc kubenswrapper[5120]: source "/env/_master"
Jan 22 11:49:02 crc kubenswrapper[5120]: set +o allexport
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips"
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP
Jan 22 11:49:02 crc kubenswrapper[5120]: # will rollout control plane pods as well
Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode)
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode)
Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared"
Jan 22 11:49:02 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local"
Jan 22 11:49:02 crc kubenswrapper[5120]: else
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-interconnect \
Jan 22 11:49:02 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \
Jan 22 11:49:02 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-pprof \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-config-duration \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${gateway_mode_flags} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-ip=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-firewall=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-qos=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-service=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multicast \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag}
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.921331 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488"}
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.921717 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Jan 22 11:49:02 crc kubenswrapper[5120]: apiVersion: v1
Jan 22 11:49:02 crc kubenswrapper[5120]: clusters:
Jan 22 11:49:02 crc kubenswrapper[5120]: - cluster:
Jan 22 11:49:02 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Jan 22 11:49:02 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443
Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-cluster
Jan 22 11:49:02 crc kubenswrapper[5120]: contexts:
Jan 22 11:49:02 crc kubenswrapper[5120]: - context:
Jan 22 11:49:02 crc kubenswrapper[5120]: cluster: default-cluster
Jan 22 11:49:02 crc kubenswrapper[5120]: namespace: default
Jan 22 11:49:02 crc kubenswrapper[5120]: user: default-auth
Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-context
Jan 22 11:49:02 crc kubenswrapper[5120]: current-context: default-context
Jan 22 11:49:02 crc kubenswrapper[5120]: kind: Config
Jan 22 11:49:02 crc kubenswrapper[5120]: preferences: {}
Jan 22 11:49:02 crc kubenswrapper[5120]: users:
Jan 22 11:49:02 crc kubenswrapper[5120]: - name: default-auth
Jan 22 11:49:02 crc kubenswrapper[5120]: user:
Jan 22 11:49:02 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Jan 22 11:49:02 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Jan 22 11:49:02 crc kubenswrapper[5120]: EOF
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.922060 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087"
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.922323 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wrdkl" event={"ID":"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f","Type":"ContainerStarted","Data":"cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5"}
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.922329 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923143 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923168 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923727 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 22 11:49:02 crc kubenswrapper[5120]: set -uo pipefail
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 22 11:49:02 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts"
Jan 22 11:49:02 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp"
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes.
Jan 22 11:49:02 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting."
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: while true; do
Jan 22 11:49:02 crc kubenswrapper[5120]: declare -A svc_ips
Jan 22 11:49:02 crc kubenswrapper[5120]: for svc in "${services[@]}"; do
Jan 22 11:49:02 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries
Jan 22 11:49:02 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 22 11:49:02 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 22 11:49:02 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 22 11:49:02 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 22 11:49:02 crc kubenswrapper[5120]: for i in ${!cmds[*]}
Jan 22 11:49:02 crc kubenswrapper[5120]: do
Jan 22 11:49:02 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}"))
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}"
Jan 22 11:49:02 crc kubenswrapper[5120]: break
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs
Jan 22 11:49:02 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 22 11:49:02 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 22 11:49:02 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 22 11:49:02 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: continue
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # Append resolver entries for services
Jan 22 11:49:02 crc kubenswrapper[5120]: rc=0
Jan 22 11:49:02 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do
Jan 22 11:49:02 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: continue
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]:
Jan 22 11:49:02 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 22 11:49:02 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed
Jan 22 11:49:02 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 22 11:49:02 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: unset svc_ips
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.924771 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.925190 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.926376 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.938460 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.978367 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982773 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982788 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982826 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.018886 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.065365 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\
"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085336 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085346 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085370 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.098875 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.139456 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.177729 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187405 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187415 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.219517 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252832 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252880 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252938 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.253006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253091 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253146 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253131089 +0000 UTC m=+79.997079450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253469 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253552 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.25354024 +0000 UTC m=+79.997488591 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253616 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253634 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253648 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253679 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253669243 +0000 UTC m=+79.997617594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253730 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253741 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253750 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253778 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253769075 +0000 UTC m=+79.997717436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.256426 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289822 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289844 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.302379 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.339278 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.353588 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.353691 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.353671633 +0000 UTC m=+80.097619974 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.378368 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5041
72345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: 
I0122 11:49:03.392275 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392379 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392439 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.416626 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"al
locatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.454596 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.454716 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.454775 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.454761659 +0000 UTC m=+80.198710000 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.460188 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495235 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495329 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495377 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.498395 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.540781 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571159 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571311 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571338 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571397 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571460 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571546 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571450 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571867 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.579275 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.580479 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.582616 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.584583 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.586234 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.590719 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.594540 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.596565 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597550 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.598167 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.599043 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.601795 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.604840 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.607246 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.608157 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.610577 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.612129 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.613163 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.614315 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.616353 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.617278 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.618514 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.619932 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Jan 22 11:49:03
crc kubenswrapper[5120]: I0122 11:49:03.621692 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.621861 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.622594 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.623454 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.624646 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.625969 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 22 
11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.626783 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.627485 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.629788 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.630342 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.631750 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.632608 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.634406 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.635269 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.636349 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.636916 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.637596 5120 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.638117 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.640676 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.642110 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Jan 22
11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.643023 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.644292 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.644806 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.646142 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.646807 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.647280 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.648400 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.649497 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.650731 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.651532 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.652751 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.653614 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.655014 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.656171 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122
11:49:03.658009 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.659302 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.659581 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.660113 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.660893 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699947 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699978 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699995 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.700009 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.700356 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.739984 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.778762 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.801776 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.801984 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802049 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802136 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.821770 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.858816 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904128 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904178 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.906760 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.949390 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"
memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},
\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.979817 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006173 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006186 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006196 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.022163 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.060694 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.103664 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108236 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108273 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.141665 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.186118 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.209927 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210106 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210129 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.221883 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.263225 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.297273 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312048 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312097 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312106 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312131 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.340330 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.376731 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414518 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414530 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414545 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414556 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.418947 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516866 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516886 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516941 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620467 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620994 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.621112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.621217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724095 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724159 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724170 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724183 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724211 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826786 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929005 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929236 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929495 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031790 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031800 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133360 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133369 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234818 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234850 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234862 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274520 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274651 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274703 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:09.274689176 +0000 UTC m=+84.018637517 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274723 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274766 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274796 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274806 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274824 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.274800098 +0000 UTC m=+84.018748439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274900 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.27488227 +0000 UTC m=+84.018830611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274745 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274915 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274921 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.275002 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.274948592 +0000 UTC m=+84.018896933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338033 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338079 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338091 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338111 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338123 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.375234 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.375370 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.375347243 +0000 UTC m=+84.119295594 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441015 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441079 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441096 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441136 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459354 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459415 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459429 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459448 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459462 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.471044 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474638 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474650 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474681 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.476237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.476353 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.476408 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.476388989 +0000 UTC m=+84.220337340 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.486718 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd
602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"si
zeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490273 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.506310 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510517 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510529 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510539 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.524485 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529327 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529420 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529450 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529490 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529521 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.542863 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.543171 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545682 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545730 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545752 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571203 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571392 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571211 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571427 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571203 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571535 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571683 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571798 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.586470 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.596750 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.609670 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.632227 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b
4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.644913 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648137 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648150 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648164 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648172 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.655169 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.663752 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.677803 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.690420 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.713183 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.731030 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.744765 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.749839 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750159 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750399 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.754450 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.765303 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.773217 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.781493 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.792526 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.801380 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.809148 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854157 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854305 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854327 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958552 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958656 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958684 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958699 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061257 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061277 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061290 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164256 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164281 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164311 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164333 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267548 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267649 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371475 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474324 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474487 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474549 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578217 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578231 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578262 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681350 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681495 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783318 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783333 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783364 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886859 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886936 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989625 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092344 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092425 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092446 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195588 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195725 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195754 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299403 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299563 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299630 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402465 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402500 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505843 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505892 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571034 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571049 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.571298 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571297 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571578 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.571849 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.572013 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.572121 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608902 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608920 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711451 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711551 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711564 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814541 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814653 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814688 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814714 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918381 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918450 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.020923 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021067 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021122 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021150 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.123949 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124075 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226901 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.227027 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330229 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330251 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330307 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433781 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433866 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433887 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433931 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536772 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536814 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536826 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536851 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.639948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640014 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640025 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640042 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640053 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742242 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742302 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742338 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742351 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846174 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846184 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846201 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.906844 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.948949 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949056 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949101 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053180 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053209 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053222 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155394 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258446 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328326 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328372 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328399 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328421 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328479 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328480 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328529 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328515793 +0000 UTC m=+92.072464134 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328527 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328541 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:17.328535853 +0000 UTC m=+92.072484194 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328550 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328560 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328590 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328581304 +0000 UTC m=+92.072529645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328767 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328823 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328838 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328946 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328919243 +0000 UTC m=+92.072867584 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361684 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361712 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361747 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361801 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.429922 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.430271 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.430212815 +0000 UTC m=+92.174161166 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.432771 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464387 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.531615 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.531940 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.532111 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.532074111 +0000 UTC m=+92.276022472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566739 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566800 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566857 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566875 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571416 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571424 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.571554 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.571931 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.572085 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.572173 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.572402 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669838 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669903 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669922 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669935 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772617 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772736 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772792 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.877496 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.877621 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.877652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.877694 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.877739 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record heartbeat block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with the identical KubeletNotReady/no-CNI-configuration message) repeats roughly every 100 ms from 11:49:09.982507 through 11:49:11.533278 ...]
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571330 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571420 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571351 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571605 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.571784 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
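Every heartbeat block above repeats one root cause: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, so the runtime keeps reporting NetworkReady=false and the kubelet holds the node at NotReady. A hedged way to watch the condition clear (plain coreutils plus oc; the directory is the one quoted in the message, and openshift-ovn-kubernetes is the network namespace referenced later in this log):

    # stays empty until the network operator drops its config file here
    ls -l /etc/kubernetes/cni/net.d/
    # the OVN-Kubernetes pods are what eventually write that file
    oc get pods -n openshift-ovn-kubernetes -w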
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.572026 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.572282 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.571750 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635726 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635765 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:11Z","lastTransitionTime":"2026-01-22T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record heartbeat block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with the identical KubeletNotReady/no-CNI-configuration message) repeats roughly every 100 ms from 11:49:11.739259 through 11:49:13.490300 ...]
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571392 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571455 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.571595 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571895 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572016 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572196 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.572240 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572294 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.574338 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:13 crc kubenswrapper[5120]: 	container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 22 11:49:13 crc kubenswrapper[5120]: 	if [[ -f "/env/_master" ]]; then
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  set -o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  source "/env/_master"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  set +o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: 	fi
Jan 22 11:49:13 crc kubenswrapper[5120]: 	# OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Jan 22 11:49:13 crc kubenswrapper[5120]: 	# https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Jan 22 11:49:13 crc kubenswrapper[5120]: 	ho_enable="--enable-hybrid-overlay"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	# extra-allowed-user: service account `ovn-kubernetes-control-plane`
Jan 22 11:49:13 crc kubenswrapper[5120]: 	# sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Jan 22 11:49:13 crc kubenswrapper[5120]: 	exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --webhook-cert-dir="/etc/webhook-cert" \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --webhook-host=127.0.0.1 \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --webhook-port=9743 \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  ${ho_enable} \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --enable-interconnect \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --disable-approver \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --wait-for-kubernetes-api=200s \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --loglevel="${LOGLEVEL}"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:13 crc kubenswrapper[5120]:  > logger="UnhandledError"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.574453 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.577293 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.578064 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:13 crc kubenswrapper[5120]: 	container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 22 11:49:13 crc kubenswrapper[5120]: 	if [[ -f "/env/_master" ]]; then
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  set -o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  source "/env/_master"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  set +o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: 	fi
Jan 22 11:49:13 crc kubenswrapper[5120]: 	
Jan 22 11:49:13 crc kubenswrapper[5120]: 	echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --disable-webhook \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Jan 22 11:49:13 crc kubenswrapper[5120]: 	  --loglevel="${LOGLEVEL}"
Jan 22 11:49:13 crc kubenswrapper[5120]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:13 crc kubenswrapper[5120]:  > logger="UnhandledError"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.578436 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.579266 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592423 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592491 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592568 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694659 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694758 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694801 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694817 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.797941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798084 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798133 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900850 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900903 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900925 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004136 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004160 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004224 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109680 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109805 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109912 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213220 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213248 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213268 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315331 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315391 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315433 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418329 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418646 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520725 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520770 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520787 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725767 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725784 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725802 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725812 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827607 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827677 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827722 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929919 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929980 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929989 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.930001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.930010 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032401 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032573 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032601 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135178 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135224 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135238 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135248 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237899 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340328 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340357 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340377 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442416 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442445 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545289 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545337 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571071 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571083 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571310 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571766 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571794 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571776 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571898 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571942 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.573389 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:15 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 22 11:49:15 crc kubenswrapper[5120]: while [ true ]; Jan 22 11:49:15 crc kubenswrapper[5120]: do Jan 22 11:49:15 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Jan 22 11:49:15 crc kubenswrapper[5120]: echo $f Jan 22 11:49:15 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Jan 22 11:49:15 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 22 11:49:15 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 22 11:49:15 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Jan 22 11:49:15 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:15 crc kubenswrapper[5120]: else Jan 22 11:49:15 crc kubenswrapper[5120]: mkdir $reg_dir_path Jan 22 11:49:15 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:15 crc kubenswrapper[5120]: fi Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Jan 22 11:49:15 crc kubenswrapper[5120]: echo $d Jan 22 11:49:15 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 22 11:49:15 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Jan 22 11:49:15 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Jan 22 11:49:15 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Jan 22 11:49:15 crc kubenswrapper[5120]: fi Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: sleep 60 & wait ${!} Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Jan 22 11:49:15 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.574097 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.574747 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.575717 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.591614 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.606059 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.615769 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.623515 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.636496 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.644207 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647567 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647693 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647741 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.654635 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.668741 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.682010 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.701442 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.715329 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.729939 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.742895 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc 
kubenswrapper[5120]: I0122 11:49:15.749604 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749685 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749763 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.765797 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\"
:\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\
\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.779054 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.796267 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.806203 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.817027 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.827025 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832087 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832101 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832130 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.844810 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848088 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848107 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.857812 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861514 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861543 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.870154 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873305 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873368 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873386 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873397 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.882469 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.888832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.888991 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889027 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889099 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.902514 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.902653 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904471 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904514 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904543 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904557 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.006899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007005 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007068 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109296 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109360 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211283 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211524 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314360 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314379 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314394 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416491 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416536 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518847 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518861 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518870 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.572370 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.572538 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.572939 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573499 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:16 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:16 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573760 5120 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 22 11:49:16 crc kubenswrapper[5120]: apiVersion: v1 Jan 22 11:49:16 crc kubenswrapper[5120]: clusters: Jan 22 11:49:16 crc kubenswrapper[5120]: - cluster: Jan 22 11:49:16 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 22 11:49:16 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Jan 22 11:49:16 crc kubenswrapper[5120]: name: default-cluster Jan 22 11:49:16 crc kubenswrapper[5120]: contexts: Jan 22 11:49:16 crc kubenswrapper[5120]: - context: Jan 22 11:49:16 crc kubenswrapper[5120]: cluster: default-cluster Jan 22 11:49:16 crc kubenswrapper[5120]: namespace: default Jan 22 11:49:16 crc kubenswrapper[5120]: user: default-auth Jan 22 11:49:16 crc kubenswrapper[5120]: name: default-context Jan 22 11:49:16 crc kubenswrapper[5120]: current-context: default-context Jan 22 11:49:16 crc kubenswrapper[5120]: kind: Config Jan 22 11:49:16 crc kubenswrapper[5120]: preferences: {} Jan 22 11:49:16 crc kubenswrapper[5120]: users: Jan 22 11:49:16 crc kubenswrapper[5120]: - name: default-auth Jan 22 11:49:16 crc kubenswrapper[5120]: user: Jan 22 11:49:16 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:16 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:16 crc kubenswrapper[5120]: EOF Jan 22 11:49:16 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573966 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:16 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:16 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:16 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:16 crc kubenswrapper[5120]: else Jan 22 11:49:16 crc 
kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:16 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:16 crc kubenswrapper[5120]: fi Jan 22 11:49:16 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:16 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575009 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575040 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575049 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575105 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct 
envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620861 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620891 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620905 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723323 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723362 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928690 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031380 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031391 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133600 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133617 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133637 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133650 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.235977 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236058 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236082 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334451 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334563 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334540643 +0000 UTC m=+108.078489034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334458 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334590 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334608 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334618 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334652 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334695 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334679486 +0000 UTC m=+108.078627827 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334706 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334737 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334726368 +0000 UTC m=+108.078674709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335073 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335156 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335221 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335326 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.335311062 +0000 UTC m=+108.079259403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338271 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338336 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338346 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.436312 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.436543 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.436510822 +0000 UTC m=+108.180459163 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440543 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440623 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440642 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440691 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.538794 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.538999 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
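The UnmountVolume.TearDown failure above is a registry-lookup problem: CSI volume operations are dispatched to drivers that have announced themselves to the kubelet over the plugin-registration socket, and until kubevirt.io.hostpath-provisioner re-registers after the restart, every lookup fails and the unmount is requeued. A sketch of that lookup shape (illustrative, not the real csi_plugin.go types):

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry stands in for the kubelet's set of CSI drivers that have
    // completed plugin registration.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]struct{}
    }

    // client fails exactly the way the log does when the driver has not
    // (re)registered yet; callers back off and retry.
    func (r *driverRegistry) client(name string) error {
        r.mu.RLock()
        defer r.mu.RUnlock()
        if _, ok := r.drivers[name]; !ok {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]struct{}{}}
        // Before the hostpath provisioner comes back up, TearDown fails:
        fmt.Println(reg.client("kubevirt.io.hostpath-provisioner"))
    }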
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542807 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542838 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542848 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.571750 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.571999 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572071 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572161 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572167 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572186 5120 util.go:30] "No sandbox for pod can be found. 
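Every failed volume operation above is requeued by nestedpendingoperations with exponential backoff; a 16s durationBeforeRetry is consistent with a delay that starts around 500ms and doubles per consecutive failure (0.5s, 1s, 2s, 4s, 8s, 16s) up to a cap of roughly two minutes. The exact constants are an assumption, not something the log states; a sketch under that assumption:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the retry delay on every consecutive failure,
    // mirroring the growing durationBeforeRetry values in the log.
    // Initial delay and cap are assumed, not read from the log.
    func nextBackoff(failures int) time.Duration {
        d := 500 * time.Millisecond
        ceiling := 2*time.Minute + 2*time.Second
        for i := 0; i < failures; i++ {
            d *= 2
            if d > ceiling {
                return ceiling
            }
        }
        return d
    }

    func main() {
        for f := 0; f <= 6; f++ {
            fmt.Printf("failure %d -> retry in %v\n", f, nextBackoff(f))
        }
        // failure 5 -> retry in 16s, matching "durationBeforeRetry 16s" above.
    }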
Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572186 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572389 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572641 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.573425 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:17 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Jan 22 11:49:17 crc kubenswrapper[5120]: set -euo pipefail
Jan 22 11:49:17 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Jan 22 11:49:17 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Jan 22 11:49:17 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present.
Jan 22 11:49:17 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Jan 22 11:49:17 crc kubenswrapper[5120]: TS=$(date +%s)
Jan 22 11:49:17 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Jan 22 11:49:17 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: log_missing_certs(){
Jan 22 11:49:17 crc kubenswrapper[5120]: CUR_TS=$(date +%s)
Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Jan 22 11:49:17 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Jan 22 11:49:17 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]: }
Jan 22 11:49:17 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Jan 22 11:49:17 crc kubenswrapper[5120]: log_missing_certs
Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 5
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Jan 22 11:49:17 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \
Jan 22 11:49:17 crc kubenswrapper[5120]: --logtostderr \
Jan 22 11:49:17 crc kubenswrapper[5120]: --secure-listen-address=:9108 \
Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Jan 22 11:49:17 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \
Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \
Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT}
Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.573627 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:17 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 22 11:49:17 crc kubenswrapper[5120]: set -uo pipefail
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 22 11:49:17 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts"
Jan 22 11:49:17 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp"
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes.
Jan 22 11:49:17 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 22 11:49:17 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting."
Jan 22 11:49:17 crc kubenswrapper[5120]: exit 1
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: while true; do
Jan 22 11:49:17 crc kubenswrapper[5120]: declare -A svc_ips
Jan 22 11:49:17 crc kubenswrapper[5120]: for svc in "${services[@]}"; do
Jan 22 11:49:17 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries
Jan 22 11:49:17 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 22 11:49:17 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 22 11:49:17 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 22 11:49:17 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 22 11:49:17 crc kubenswrapper[5120]: for i in ${!cmds[*]}
Jan 22 11:49:17 crc kubenswrapper[5120]: do
Jan 22 11:49:17 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}"))
Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 22 11:49:17 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}"
Jan 22 11:49:17 crc kubenswrapper[5120]: break
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs
Jan 22 11:49:17 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 22 11:49:17 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 22 11:49:17 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 22 11:49:17 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 22 11:49:17 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:17 crc kubenswrapper[5120]: continue
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: # Append resolver entries for services
Jan 22 11:49:17 crc kubenswrapper[5120]: rc=0
Jan 22 11:49:17 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do
Jan 22 11:49:17 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do
Jan 22 11:49:17 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then
Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:17 crc kubenswrapper[5120]: continue
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]:
Jan 22 11:49:17 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 22 11:49:17 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed
Jan 22 11:49:17 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 22 11:49:17 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 22 11:49:17 crc kubenswrapper[5120]: fi
Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:17 crc kubenswrapper[5120]: unset svc_ips
Jan 22 11:49:17 crc kubenswrapper[5120]: done
Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.574801 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f"
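All of these CreateContainerConfigError failures share one cause: before starting any container, the kubelet constructs the legacy Docker-links-style service environment variables (KUBERNETES_SERVICE_HOST and friends), and it refuses to do so until its service informer has completed at least one successful list, which is exactly the "services have not yet been read at least once, cannot construct envvars" message. A simplified sketch of that gate (illustrative types, not the kubelet's actual code):

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    type serviceLister struct {
        synced   bool              // has the informer completed an initial LIST?
        services map[string]string // service name -> cluster IP
    }

    // envVarsForPod refuses to construct env vars before the first successful
    // list; in the kubelet this surfaces as CreateContainerConfigError.
    func (l *serviceLister) envVarsForPod() ([]string, error) {
        if !l.synced {
            return nil, errors.New("services have not yet been read at least once, cannot construct envvars")
        }
        var env []string
        for name, ip := range l.services {
            prefix := strings.ToUpper(strings.ReplaceAll(name, "-", "_"))
            env = append(env, fmt.Sprintf("%s_SERVICE_HOST=%s", prefix, ip))
        }
        return env, nil
    }

    func main() {
        l := &serviceLister{synced: false}
        if _, err := l.envVarsForPod(); err != nil {
            fmt.Println("CreateContainerConfigError:", err)
        }
    }

Once the apiserver becomes reachable and the informer syncs, the same pods start without any change on their side; the error is transient by design.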
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:17 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:17 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Jan 22 11:49:17 crc kubenswrapper[5120]: # will rollout control plane pods as well Jan 22 11:49:17 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: route_advertisements_enable_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 
crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Jan 22 11:49:17 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Jan 22 11:49:17 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Jan 22 11:49:17 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Jan 22 11:49:17 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Jan 22 11:49:17 crc kubenswrapper[5120]: else Jan 22 11:49:17 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 22 11:49:17 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 22 11:49:17 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:17 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 22 11:49:17 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-enable-pprof \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-ip=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-qos=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-service=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-multicast \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.577740 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645898 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.646010 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.646025 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748353 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748366 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.851871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852011 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852165 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955207 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955260 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955294 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955305 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057565 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057723 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161156 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161176 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161190 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263430 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263453 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263463 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366485 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366570 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469254 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469368 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571723 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571870 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571925 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675468 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675505 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675517 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical node-status cycles (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeat at ~100 ms intervals from 11:49:18.777 through 11:49:19.492 ...]
Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571366 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571408 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571510 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571609 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571705 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571730 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571838 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.572005 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
[... node-status cycles repeat at ~100 ms intervals from 11:49:19.594 through 11:49:20.003 ...]
Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.045005 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
[... node-status cycles repeat at ~100 ms intervals from 11:49:20.104 through 11:49:21.536 ...]
Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572061 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572456 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572149 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572675 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572129 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572909 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572208 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.573336 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
[... node-status cycles repeat at ~100 ms intervals from 11:49:21.639 through 11:49:23.481 ...]
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571543 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571566 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.571677 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571555 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.571942 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.572023 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.572086 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.572225 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
[... node-status cycles repeat at ~100 ms intervals from 11:49:23.582 through 11:49:24.094 ...]
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195906 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195973 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.196001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.196012 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298189 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298198 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298212 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298221 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400131 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400183 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400223 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.502998 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503065 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503080 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503091 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605655 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605714 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605761 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710339 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710825 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710846 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710858 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.813750 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.813884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814123 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814159 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917283 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917293 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917332 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.984576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81"} Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.984662 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.006326 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022173 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022611 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022692 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.038438 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.058760 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"
memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},
\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.072190 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.084571 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.095634 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 
11:49:25.110046 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.120447 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124623 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.138550 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.153831 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.167619 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
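The kube-apiserver-check-endpoints container above reports restartCount 3 and a CrashLoopBackOff wait of "back-off 40s". That interval matches the kubelet's default crash-loop schedule: an initial 10s delay that doubles after each restart, capped at five minutes (10s -> 20s -> 40s for three restarts). A minimal Go sketch of that arithmetic; the constants are kubelet defaults, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models kubelet's default CrashLoopBackOff schedule:
// an initial delay that doubles after each restart, up to a cap.
func crashLoopDelay(restartCount int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < restartCount; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("restart %d -> back-off %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// restart 3 -> back-off 40s, matching the log entry above.
}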
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.176146 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
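Every status patch in this excerpt fails on the same hop: Post "https://127.0.0.1:9743/pod" returns connection refused, meaning nothing is listening on the network-node-identity webhook port yet. A small diagnostic sketch that reproduces just that dial step; the address and port are taken from the log, while the two-second timeout is an arbitrary choice:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the webhook endpoint the kubelet cannot reach. A "connection
// refused" here means the network-node-identity webhook server is not
// listening yet, matching the repeated errors above.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook port is accepting connections")
}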
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.187247 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.196000 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.205425 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.216844 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227437 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227468 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227480 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.230671 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.242044 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329943 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329999 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433488 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433556 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536231 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536245 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571803 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571834 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571988 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.571987 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.572150 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572169 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572262 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572360 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.582616 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.598423 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.609884 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.621026 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.629675 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638692 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638752 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638761 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.639917 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.648100 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.658867 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.667810 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.676294 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.682805 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.693325 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.702491 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.714690 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc 
kubenswrapper[5120]: I0122 11:49:25.739512 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\
":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741359 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741462 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.752855 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.764193 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.772850 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 
11:49:25.783855 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843417 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843527 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945527 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945550 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048470 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048518 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152152 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152171 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152200 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152221 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252171 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252206 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252222 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.268596 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273418 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273451 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273460 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273482 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.288531 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292819 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292832 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292868 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292880 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.304661 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309222 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309300 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309321 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309359 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309408 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.322263 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329154 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329170 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329202 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.341173 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.341350 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342698 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342738 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445855 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445892 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.547641 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548031 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548073 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650331 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650475 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650488 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752357 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752390 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859592 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859603 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961763 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.993705 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"98adc11275c61fcc437ee7afbd57096d086ee979acd0013b5c59c635048f3ac3"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.993773 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c84e5a6ed25fd1100d4cbdf237cc499dbd601f84526ab419d876a0dce61d0501"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.005023 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.014569 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.022636 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.032308 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.041585 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.051585 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063763 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063775 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063801 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166229 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166264 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.172479 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=26.172453105 podStartE2EDuration="26.172453105s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.172085536 +0000 UTC m=+101.916033907" watchObservedRunningTime="2026-01-22 11:49:27.172453105 +0000 UTC m=+101.916401446" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.240414 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=26.24039668 podStartE2EDuration="26.24039668s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.238406301 +0000 UTC m=+101.982354832" watchObservedRunningTime="2026-01-22 11:49:27.24039668 +0000 UTC m=+101.984345021" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268712 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268746 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.272450 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podStartSLOduration=82.272425105 podStartE2EDuration="1m22.272425105s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.271700528 +0000 UTC m=+102.015648869" watchObservedRunningTime="2026-01-22 11:49:27.272425105 +0000 UTC m=+102.016373446" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.370974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371048 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371085 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473047 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473097 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473141 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571413 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571605 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571641 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571697 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571715 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571788 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571868 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571966 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575404 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677909 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677984 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.678012 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.678022 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780454 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780544 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780589 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.883396 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.883439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.883450 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.883464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.883475 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.985759 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.985812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.985821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.985835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.985845 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.088575 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.088635 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.088648 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.088672 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.088687 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.191661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.191720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.191731 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.191747 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.191757 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.293942 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.294018 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.294033 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.294051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.294063 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.395827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.395878 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.395891 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.395907 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.395918 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.498064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.498116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.498133 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.498149 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.498160 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.601018 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.601057 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.601067 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.601083 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.601092 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.704389 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.704452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.704466 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.704544 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.704569 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.806509 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.806568 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.806579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.806600 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.806617 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909274 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909326 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909339 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.002270 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tf9nb" event={"ID":"f9f485fd-0793-40a0-abf8-12fd3b612c87","Type":"ContainerStarted","Data":"5a26a20f8db539ea64a8dabdc450533dc213011b1ea84582f770f8da2b853204"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.004515 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.004588 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.010912 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.010997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011009 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011037 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.041779 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=28.041742101 podStartE2EDuration="28.041742101s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.041167016 +0000 UTC m=+103.785115357" watchObservedRunningTime="2026-01-22 11:49:29.041742101 +0000 UTC m=+103.785690452" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.075120 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.075088328 podStartE2EDuration="28.075088328s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.055693783 +0000 UTC m=+103.799642144" watchObservedRunningTime="2026-01-22 11:49:29.075088328 +0000 UTC m=+103.819036709" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.104524 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tf9nb" podStartSLOduration=84.104504339 podStartE2EDuration="1m24.104504339s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.090600018 +0000 UTC m=+103.834548369" watchObservedRunningTime="2026-01-22 11:49:29.104504339 +0000 UTC m=+103.848452680" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113218 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113366 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.124299 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podStartSLOduration=84.124278213 podStartE2EDuration="1m24.124278213s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.124199691 +0000 UTC m=+103.868148032" watchObservedRunningTime="2026-01-22 11:49:29.124278213 +0000 UTC m=+103.868226554" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215010 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215109 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317894 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317925 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317948 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421300 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421311 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523697 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523998 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.524015 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571109 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571235 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571310 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571356 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571464 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571521 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571552 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571628 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626581 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626637 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626649 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626665 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626674 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.729006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.729056 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.729065 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.729082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.729092 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.831417 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.831472 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.831484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.831504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.831516 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.933943 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.934007 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.934016 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.934034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.934045 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.035524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.035573 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.035582 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.035595 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.035605 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.138315 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.138385 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.138399 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.138419 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.138435 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.241118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.241182 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.241199 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.241218 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.241235 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.344265 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.344394 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.344408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.344423 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.344433 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.447743 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.447817 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.447831 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.447853 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.447871 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550393 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550457 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550470 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550490 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550503 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.573216 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:30 crc kubenswrapper[5120]: E0122 11:49:30.573585 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.654781 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655349 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655367 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655389 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655404 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757848 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757870 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757883 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.860110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.860179 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.860192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.860214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.860230 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.963949 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.964054 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.964072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.964098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.964110 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.015024 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"b9f937f5e3872af6c060d152d7740bf273be6070248e28fee7ad3af6a194ef09"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.018557 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wrdkl" event={"ID":"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f","Type":"ContainerStarted","Data":"b11f230eb0d79f0c57e2b3e60b36d832b324f6a02f94ba8d75924b3605e32a7d"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.021754 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="310ec001d9a4dce7a548d57b1f0b1cdcd52e5b7937bc72e95db5b1033742786b" exitCode=0 Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.021851 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"310ec001d9a4dce7a548d57b1f0b1cdcd52e5b7937bc72e95db5b1033742786b"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067147 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067225 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067261 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.077980 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wrdkl" podStartSLOduration=87.077931026 podStartE2EDuration="1m27.077931026s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:31.053079027 +0000 UTC m=+105.797027428" watchObservedRunningTime="2026-01-22 11:49:31.077931026 +0000 UTC m=+105.821879367" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.169994 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170080 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273432 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273517 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273539 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273554 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376905 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376917 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480505 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571014 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571644 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571729 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572049 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572714 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572900 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.573098 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.573241 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583556 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583640 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583653 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583682 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.686729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.686801 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.686822 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.686852 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.686873 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.789935 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.790020 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.790037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.790060 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.790072 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.892914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.893006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.893022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.893044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.893059 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996632 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996746 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.028130 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="3893663ea5a85fdb7a9ba62aff94b278d0d941f8da598a8444fcdaaa8a0a96fa" exitCode=0 Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.028238 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"3893663ea5a85fdb7a9ba62aff94b278d0d941f8da598a8444fcdaaa8a0a96fa"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.033467 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c76cdb48f202911a3d0b51441046ec86c1d066a9c70e94de7578c6d134092895"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.035700 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.037516 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" exitCode=0 Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.037596 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.078418 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4lzht" podStartSLOduration=87.078382032 podStartE2EDuration="1m27.078382032s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:32.07831452 +0000 UTC m=+106.822262901" watchObservedRunningTime="2026-01-22 11:49:32.078382032 
+0000 UTC m=+106.822330393" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100324 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100339 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100349 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215553 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215570 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215582 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322908 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322973 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322988 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.323000 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425511 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425562 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528414 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528426 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631186 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631258 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631274 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631297 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631314 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734469 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734499 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837333 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837384 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940564 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940582 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940630 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057002 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057144 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161281 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161337 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161385 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264443 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264793 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.334875 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335114 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335159 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335227 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335307 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335379 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335481 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335405664 +0000 UTC m=+140.079354005 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335520 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335526 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335508536 +0000 UTC m=+140.079456877 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335669 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335673 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335649411 +0000 UTC m=+140.079597752 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335681 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335694 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335747 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335727652 +0000 UTC m=+140.079675993 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368459 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368479 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368652 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.437414 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.437997 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.437843755 +0000 UTC m=+140.181792126 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473726 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473794 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473813 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.474028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.474048 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571700 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571684 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571936 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571945 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572505 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572672 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572990 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.573112 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576806 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576844 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.602353 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="1e4c017d60fd56591949c3a9cb6fdffe623b4653c8a74d54fa756a0ec9f724be" exitCode=0
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.602474 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"1e4c017d60fd56591949c3a9cb6fdffe623b4653c8a74d54fa756a0ec9f724be"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.607708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.607756 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.640257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.640507 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.640561 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.640545801 +0000 UTC m=+140.384494152 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678673 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678744 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780402 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780429 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780439 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.884976 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885056 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885070 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.986997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987096 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089432 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089467 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089479 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192224 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192293 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192310 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192332 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192346 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294796 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294811 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404164 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404254 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511845 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511862 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511874 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.612450 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="74e5daf9f7179d097931f8055d630e02712aaa4ef010292832f9de7652b7cbdc" exitCode=0
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.612539 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"74e5daf9f7179d097931f8055d630e02712aaa4ef010292832f9de7652b7cbdc"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614062 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614119 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614134 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614147 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617148 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617157 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617165 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717590 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.819926 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.819990 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820034 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927896 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927916 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927975 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030158 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030237 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030286 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030301 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133278 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133353 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133373 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133396 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133410 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235595 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235654 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235672 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235696 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235714 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338673 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338790 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338806 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444882 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548212 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548274 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548311 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548325 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.573704 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.573842 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.573856 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.574019 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574209 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574369 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.574415 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574474 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.628008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652865 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652910 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.858088 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960633 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960706 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960743 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960759 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063424 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166291 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166302 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166326 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269517 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269607 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.372981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373059 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373127 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475234 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475272 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526253 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526270 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526282 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
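The setters.go:618 records above embed the node's Ready condition as inline JSON. A minimal stdlib-only Go sketch that decodes one of those payloads; the struct below is a hand-rolled mirror of the fields visible in the log, not the upstream k8s.io/api NodeCondition type:

    // decode_condition.go - decodes the condition={...} payload embedded in
    // the "Node became not ready" records above (stdlib only; illustrative).
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "time"
    )

    type nodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // Payload copied verbatim from the 11:49:35.652910 record.
        payload := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(payload), &c); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
    }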
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.573668 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.574895 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"]
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.577727 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.580694 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.581525 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.582075 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.582104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.585278 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.643094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.645071 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7" exitCode=0
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.645105 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679853 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679899 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679942 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.680018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780731 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780805 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780827 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781061 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781107 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.782090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.796786 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.802490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.891349 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: W0122 11:49:36.932080 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24792411_b989_4171_80eb_92ec2002d172.slice/crio-b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f WatchSource:0}: Error finding container b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f: Status 404 returned error can't find the container with id b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571250 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571261 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571301 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571385 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571505 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571725 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571786 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.572001 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.654344 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="fbe1c72e23aac177d08f8889b1c095634d89ea3a7fa0c703aa47e19a45c6274c" exitCode=0
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.654485 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"fbe1c72e23aac177d08f8889b1c095634d89ea3a7fa0c703aa47e19a45c6274c"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.656783 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" event={"ID":"24792411-b989-4171-80eb-92ec2002d172","Type":"ContainerStarted","Data":"88e29354fcf1df2f1d68a6d530f454844c505540da80d37c683ddef0606d2cb4"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.656850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" event={"ID":"24792411-b989-4171-80eb-92ec2002d172","Type":"ContainerStarted","Data":"b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.705214 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" podStartSLOduration=92.705194912 podStartE2EDuration="1m32.705194912s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:37.704096026 +0000 UTC m=+112.448044387" watchObservedRunningTime="2026-01-22 11:49:37.705194912 +0000 UTC m=+112.449143253"
Jan 22 11:49:38 crc kubenswrapper[5120]: I0122 11:49:38.666292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"9f28c7cde882aaba8df3805668fda0e1e1c980daebff4ea6b32dec7ab2b631de"}
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571377 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571432 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571377 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571580 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571380 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571673 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571490 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571677 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673409 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"}
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673887 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673989 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.701903 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rg989" podStartSLOduration=94.70188464 podStartE2EDuration="1m34.70188464s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:38.691403419 +0000 UTC m=+113.435351780" watchObservedRunningTime="2026-01-22 11:49:39.70188464 +0000 UTC m=+114.445832981"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.702343 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podStartSLOduration=94.702337832 podStartE2EDuration="1m34.702337832s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:39.700023855 +0000 UTC m=+114.443972206" watchObservedRunningTime="2026-01-22 11:49:39.702337832 +0000 UTC m=+114.446286173"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.740903 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
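The pod_startup_latency_tracker records above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime. A short Go check of that arithmetic against the cluster-version-operator record; the timestamps are copied from the log, with the m=+... monotonic suffix dropped before parsing:

    // startup_slo.go - reproduces the podStartSLOduration arithmetic from
    // the "Observed pod startup duration" records above.
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // Layout matching "2026-01-22 11:48:05 +0000 UTC"; time.Parse accepts
    // fractional seconds even though the layout omits them.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func main() {
        created, err := time.Parse(layout, "2026-01-22 11:48:05 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        observed, err := time.Parse(layout, "2026-01-22 11:49:37.705194912 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        // Prints podStartSLOduration=92.705194912, matching the log record.
        fmt.Printf("podStartSLOduration=%.9f\n", observed.Sub(created).Seconds())
    }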
pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:40 crc kubenswrapper[5120]: I0122 11:49:40.676925 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:40 crc kubenswrapper[5120]: I0122 11:49:40.712302 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.571768 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.571994 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572173 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572287 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572464 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572564 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.196712 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ldwx4"] Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.197751 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:42 crc kubenswrapper[5120]: E0122 11:49:42.197945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.571923 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.571993 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.572020 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.571993 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.572117 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572104 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572186 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572250 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572411 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.690903 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.692852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b"} Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.693279 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.714541 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=42.714520027 podStartE2EDuration="42.714520027s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:43.713247925 +0000 UTC m=+118.457196326" watchObservedRunningTime="2026-01-22 11:49:43.714520027 +0000 UTC m=+118.458468378" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.537848 5120 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.572828 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.572948 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.573018 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573059 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573099 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.573163 5120 util.go:30] "No sandbox for pod can be found. 
Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573218 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573403 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.634885 5120 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572311 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572659 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572656 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572752 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572824 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572849 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572892 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571667 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.571813 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571664 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571867 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572005 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572061 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.572688 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572847 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571511 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571677 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571727 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.572309 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.575561 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.575830 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.576259 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.576743 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.577115 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.577530 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 22 11:49:54 crc kubenswrapper[5120]: I0122 11:49:54.707901 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.735094 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.770460 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.787205 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.787389 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"
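Once the node goes Ready at 11:49:56, the log floods with structured "SyncLoop ADD" and reflector records. A small Go filter that pulls the quoted message and simple key="value" pairs out of each klog-style record for triage; it deliberately skips values containing escaped quotes, such as the reflector="object-\"...\"" fields, so it is a triage aid rather than a full klog parser:

    // grep_records.go - extracts the message and flat key="value" pairs
    // from klog-style records like the ones above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        msgRe  = regexp.MustCompile(`\] "([^"]+)"`)        // first quoted string after the source location
        pairRe = regexp.MustCompile(`(\w+)="([^"\\]*)"`)   // key="value" without escapes
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // records can be very long
        for sc.Scan() {
            line := sc.Text()
            m := msgRe.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            fmt.Printf("msg=%q", m[1])
            for _, kv := range pairRe.FindAllStringSubmatch(line, -1) {
                fmt.Printf(" %s=%s", kv[1], kv[2])
            }
            fmt.Println()
        }
    }

For example, `go run grep_records.go < kubelet.log | grep 'SyncLoop ADD'` lists the pods admitted in this burst.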
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791087 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791106 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791633 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.793810 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.794046 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.796805 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.796973 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.799424 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.799620 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.802100 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.802297 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.804492 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806109 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806368 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806492 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806820 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.813937 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.814411 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.815319 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.824435 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.824998 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.827406 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.828032 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-6q5kp"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.828869 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829231 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829460 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829812 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.830176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.830381 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.834529 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.852940 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853171 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853694 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853835 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853874 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853924 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854003 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854022 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854203 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854320 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854339 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854426 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854502 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854540 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854514 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854750 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854823 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855114 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855224 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855461 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857122 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
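The burst of "Caches populated" records above and below can be summarized by object type. A companion Go tally (same stdin-filter pattern as the sketch earlier) that counts reflector records per *v1 type, a quick way to see how many ConfigMap and Secret watches the kubelet established once the node went Ready:

    // count_reflectors.go - tallies "Caches populated" records by type.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var typeRe = regexp.MustCompile(`"Caches populated" type="(\*v1\.\w+)"`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if m := typeRe.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for t, n := range counts {
            fmt.Printf("%s: %d\n", t, n)
        }
    }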
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857188 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857329 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857419 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857480 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857568 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857710 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857806 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857894 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857378 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858080 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857813 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858042 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858293 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858336 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858407 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858244 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858636 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858501 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859010 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859054 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859154 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859188 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.860076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862436 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862564 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862621 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862738 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.863990 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.864833 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.870096 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.872065 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.880881 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.885980 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod 
\"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886025 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886045 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886127 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886179 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886201 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod 
\"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886224 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886250 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886279 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886300 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886346 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886362 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886379 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: 
\"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886395 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886656 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886686 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886702 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886721 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886743 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886759 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886823 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886858 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886874 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: 
I0122 11:49:56.886889 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886942 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887015 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887073 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887097 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887112 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887128 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887151 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f67b\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887051 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887169 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887192 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887212 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887258 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod 
\"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887291 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887325 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887362 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887699 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.892590 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.892804 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.897627 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.903522 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906278 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906349 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906533 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906709 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906757 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906861 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906913 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.907536 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.908264 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.909114 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.910863 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.911329 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.911808 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.912682 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.912915 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.913013 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.913063 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.924627 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.924848 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925451 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925712 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925820 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925914 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.926016 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.926098 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.929397 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.929684 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.935291 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.935792 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.936061 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.937390 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.937493 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.938642 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.938994 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939095 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939165 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939033 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939342 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947431 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947753 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947976 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.966340 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967067 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967263 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967774 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.969821 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.970838 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.970941 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.975217 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.977082 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.977500 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991167 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991216 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991240 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991284 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991317 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod 
\"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991340 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991377 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991423 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991444 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991480 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991500 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991525 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991568 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991603 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991684 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991708 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991743 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991769 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991853 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991875 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991896 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991941 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991981 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5f67b\" (UniqueName: 
\"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992002 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992019 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992040 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992065 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992086 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992152 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod 
\"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992178 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992198 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992219 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992222 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992258 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992280 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992301 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: 
\"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992365 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992392 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992416 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992513 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992535 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992559 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992581 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992643 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992667 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992748 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992750 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992771 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992875 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992909 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992950 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994011 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994038 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994153 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994472 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994500 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994523 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994872 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994923 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994947 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995076 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.996676 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.997684 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.998133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.998585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:56.999870 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001310 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001823 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.002282 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003433 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003714 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.004228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.005128 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.005132 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.006198 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.006756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.008577 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.009090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.011629 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.012107 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.012586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.013383 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc 
kubenswrapper[5120]: I0122 11:49:57.013666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014019 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014138 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014407 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015137 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015238 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015539 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015692 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015797 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.016042 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.016365 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.017623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.018482 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.026045 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.029165 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.030076 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.030338 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034516 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034641 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034731 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.035176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.036060 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039094 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039620 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-7x2rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039966 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.045391 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.045890 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.046035 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050590 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050605 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050825 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.053527 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.054621 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.057428 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.057572 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.061337 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.061409 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.064013 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.064097 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.066888 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.066949 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070278 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070464 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070636 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.075146 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.075281 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.077570 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.077695 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.080194 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.080286 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.082629 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.082756 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.084828 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.084903 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087105 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087134 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087147 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087158 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087181 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087196 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.092278 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.092435 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096464 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096498 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096513 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096526 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096535 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096547 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096586 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096655 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099538 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099561 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099572 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099655 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099938 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100081 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100099 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100182 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100296 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100443 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100556 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100583 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100637 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100673 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100721 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100747 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100802 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100868 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100895 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100968 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100992 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101019 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod 
\"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101087 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101132 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101191 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101232 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101262 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101291 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101320 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101346 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101370 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101436 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101483 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " 
pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101701 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101726 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101745 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102146 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102774 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102923 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102972 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102991 5120 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103002 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103014 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103026 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103038 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103048 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103062 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103074 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103083 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103091 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103100 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103120 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103175 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103194 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103206 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103215 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 
11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103225 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103235 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103246 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lfqzp"] Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.103382 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.603368549 +0000 UTC m=+132.347316890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103551 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104056 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104319 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.105304 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106290 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106802 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.107519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106813 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.107675 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.108304 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.108619 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113394 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102969 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113868 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114014 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114052 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114065 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114076 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114140 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114254 5120 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114364 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114643 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.130506 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.150531 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.176918 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.190406 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.202396 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.702369015 +0000 UTC m=+132.446317376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202533 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202572 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202591 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202641 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202659 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202694 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202728 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202774 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202794 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202813 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202842 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202859 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202896 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202989 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203026 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") 
pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203106 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203141 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203159 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: 
\"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203191 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203236 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203278 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203294 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203325 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203342 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203359 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203376 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203420 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203444 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203471 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203506 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203541 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod 
\"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203561 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203579 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203599 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203617 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203698 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203769 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203800 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7kn\" (UniqueName: 
\"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204421 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204454 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204538 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204562 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204595 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204619 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204683 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod 
\"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204732 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204758 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204826 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204830 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204876 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204904 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x68s\" (UniqueName: 
\"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205030 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205055 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205082 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205105 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205129 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205173 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205222 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " 
pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205254 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205313 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205349 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205385 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205417 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205449 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205518 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6g7g\" (UniqueName: \"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: 
\"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205544 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205576 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205600 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.205860 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.70584815 +0000 UTC m=+132.449796711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205899 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205944 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205982 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207347 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207588 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207765 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.208748 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.210509 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.212461 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.213185 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.231054 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.250046 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.257605 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.270911 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.280992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.291379 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.294873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.308423 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.308557 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.808532096 +0000 UTC m=+132.552480437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.309792 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310204 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310334 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310577 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 
11:49:57.310680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310886 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310993 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310479 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311149 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311152 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311363 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311712 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 
11:49:57.312069 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312192 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310612 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312455 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") 
pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312748 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313016 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313169 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313354 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: 
\"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.313932 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.813916068 +0000 UTC m=+132.557864409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314013 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7kn\" (UniqueName: \"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314087 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314105 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314153 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314177 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314193 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314243 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314317 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314336 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314427 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314502 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314542 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314619 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9x68s\" (UniqueName: \"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314663 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314808 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314830 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314949 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315021 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315139 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315146 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315442 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315698 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316063 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316347 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316593 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316806 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6g7g\" (UniqueName: 
\"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317139 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317247 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317560 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.318478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317752 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319173 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319279 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319419 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319449 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319529 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319897 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320062 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320214 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320259 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320102 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320314 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320550 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320583 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320604 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: 
\"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320623 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320651 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320678 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320700 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320777 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320810 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320834 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320894 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.330878 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.350273 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.351326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.383233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.405632 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.421661 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.421896 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.921869863 +0000 UTC m=+132.665818204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.422471 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.422939 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.422999 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.922884858 +0000 UTC m=+132.666833209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.445322 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.473848 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.484016 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f67b\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.504731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod 
\"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.523766 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.524272 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.024244291 +0000 UTC m=+132.768192632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.524571 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.533096 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.535018 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.545038 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.566104 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.586667 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.611228 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.614177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.625789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.626132 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.126120198 +0000 UTC m=+132.870068539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.640059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.650675 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.660826 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.671180 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.677919 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.697428 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.708711 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.712434 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.732265 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.732439 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.232417613 +0000 UTC m=+132.976365954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.732819 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735161 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.735639 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.235626631 +0000 UTC m=+132.979574972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735639 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735744 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: W0122 11:49:57.744295 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd113660_b734_4d86_be8d_b28c5e9a328f.slice/crio-8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b WatchSource:0}: Error finding container 8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b: Status 404 returned error can't find the container with id 8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.747580 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.751609 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.756417 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.765535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.772354 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.772475 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.778820 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.785126 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.791390 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.792012 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.798111 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.811158 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.819465 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.829262 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.830947 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.835536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.836045 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.836682 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.336667588 +0000 UTC m=+133.080615929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: W0122 11:49:57.840058 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36a1cae_0915_45b1_abf9_2f44c78f3306.slice/crio-2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83 WatchSource:0}: Error finding container 2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83: Status 404 returned error can't find the container with id 2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83 Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.849009 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.851297 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.853596 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.873474 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.878459 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.899220 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.910556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.911979 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.934360 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.937668 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.938101 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.438088243 +0000 UTC m=+133.182036584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.951661 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.972642 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.993551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.995119 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.013600 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.016418 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.033726 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.044087 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.044629 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.544612773 +0000 UTC m=+133.288561114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.054320 5120 request.go:752] "Waited before sending request" delay="1.00797109s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.055723 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.055776 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.062700 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.073271 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.073867 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.091094 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.099927 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.111367 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.112442 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.131090 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.131869 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.133391 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba096274_efe0_462b_9a53_89e321166944.slice/crio-fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c WatchSource:0}: Error finding container fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c: Status 404 returned error can't find the container with id fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.146264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.146621 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.646605582 +0000 UTC m=+133.390553933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.151516 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.159673 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.169164 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.169273 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07a3946_e1f2_458f_bc29_15741de2605c.slice/crio-b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73 WatchSource:0}: Error finding container b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73: Status 404 returned error can't find the container with id b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.170829 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 
11:49:58.191497 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.194292 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.210876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.220042 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.232271 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.248012 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae478ef7_56ef_496c_b99c_4d952d5617b0.slice/crio-1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031 WatchSource:0}: Error finding container 1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031: Status 404 returned error can't find the container with id 1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.248149 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.248415 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.748393287 +0000 UTC m=+133.492341738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.250836 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.251769 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.751745819 +0000 UTC m=+133.495694170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.251850 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.257998 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.274442 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.290638 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.291790 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.295926 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.300439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.305234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312307 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312421 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert podName:7b273aff-e733-49a9-a191-88b0380500eb nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.812390964 +0000 UTC m=+133.556339305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert") pod "packageserver-7d4fc7d867-bbphb" (UID: "7b273aff-e733-49a9-a191-88b0380500eb") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.312484 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312542 5120 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312573 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert podName:503a8f02-4faa-4c71-a07b-e5cf7e21fd01 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.812566139 +0000 UTC m=+133.556514480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert") pod "ingress-canary-8wqc7" (UID: "503a8f02-4faa-4c71-a07b-e5cf7e21fd01") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.313091 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.313131 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert podName:52bf18ab-85c0-49e5-8b9d-9cb67ec54297 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.813122133 +0000 UTC m=+133.557070474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-9hjpw" (UID: "52bf18ab-85c0-49e5-8b9d-9cb67ec54297") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314425 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314465 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert podName:7b273aff-e733-49a9-a191-88b0380500eb nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814455276 +0000 UTC m=+133.558403617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert") pod "packageserver-7d4fc7d867-bbphb" (UID: "7b273aff-e733-49a9-a191-88b0380500eb") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314484 5120 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314508 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config podName:e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814503087 +0000 UTC m=+133.558451428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config") pod "service-ca-operator-5b9c976747-7ghwq" (UID: "e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314544 5120 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314567 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca podName:17d1692e-e64c-415e-98c6-fc0e5c799fe0 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814561528 +0000 UTC m=+133.558509869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-dpf6p" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314988 5120 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315022 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume podName:2667e960-0d1a-4c78-97ea-b1852f27ce17 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.815013619 +0000 UTC m=+133.558961960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume") pod "collect-profiles-29484705-g489w" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315800 5120 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315985 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls podName:2380d23f-8320-4c77-9936-215ff48a32c8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.815922371 +0000 UTC m=+133.559870882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls") pod "dns-default-d4ftw" (UID: "2380d23f-8320-4c77-9936-215ff48a32c8") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316036 5120 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316073 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle podName:d92ccf27-d679-4304-98b0-a6e74c7ffda2 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816063805 +0000 UTC m=+133.560012326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle") pod "service-ca-74545575db-llz79" (UID: "d92ccf27-d679-4304-98b0-a6e74c7ffda2") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316085 5120 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316109 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316126 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls podName:3cc31b0e-b225-470f-870b-f89666eae47b nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816118116 +0000 UTC m=+133.560066457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-75ffdb6fcd-fhxb8" (UID: "3cc31b0e-b225-470f-870b-f89666eae47b") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316143 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls podName:f7fc5383-db19-483a-afb9-23d3f8065a64 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816134746 +0000 UTC m=+133.560083297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls") pod "machine-config-controller-f9cdd68f7-kprrg" (UID: "f7fc5383-db19-483a-afb9-23d3f8065a64") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316153 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316177 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token podName:a909382a-a9be-43ea-b525-c382d3d7dac9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816171757 +0000 UTC m=+133.560120098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token") pod "machine-config-server-lfqzp" (UID: "a909382a-a9be-43ea-b525-c382d3d7dac9") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.316201 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"]
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322240 5120 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322334 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs podName:da2b1465-54c1-4a7d-8cb6-755b28e448b8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822310777 +0000 UTC m=+133.566259118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs") pod "multus-admission-controller-69db94689b-dp8rm" (UID: "da2b1465-54c1-4a7d-8cb6-755b28e448b8") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322364 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322392 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs podName:a909382a-a9be-43ea-b525-c382d3d7dac9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822383339 +0000 UTC m=+133.566331880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs") pod "machine-config-server-lfqzp" (UID: "a909382a-a9be-43ea-b525-c382d3d7dac9") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322415 5120 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322447 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume podName:2380d23f-8320-4c77-9936-215ff48a32c8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.82243831 +0000 UTC m=+133.566386861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume") pod "dns-default-d4ftw" (UID: "2380d23f-8320-4c77-9936-215ff48a32c8") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322554 5120 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322586 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist podName:48ce43ae-5f5f-4ae6-91bd-98390a12c650 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822578035 +0000 UTC m=+133.566526716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-mddkn" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322603 5120 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322629 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key podName:d92ccf27-d679-4304-98b0-a6e74c7ffda2 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822619806 +0000 UTC m=+133.566568337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key") pod "service-ca-74545575db-llz79" (UID: "d92ccf27-d679-4304-98b0-a6e74c7ffda2") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322654 5120 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322682 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert podName:e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822674937 +0000 UTC m=+133.566623468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert") pod "service-ca-operator-5b9c976747-7ghwq" (UID: "e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322716 5120 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322748 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics podName:17d1692e-e64c-415e-98c6-fc0e5c799fe0 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822739649 +0000 UTC m=+133.566688000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-dpf6p" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322767 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322796 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert podName:91b3eb8a-7090-484d-ae8f-8bbe990bce4d nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822789 +0000 UTC m=+133.566737521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert") pod "catalog-operator-75ff9f647d-fscmd" (UID: "91b3eb8a-7090-484d-ae8f-8bbe990bce4d") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.332496 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.351740 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.354088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.354495 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.854467545 +0000 UTC m=+133.598415886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.354780 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.355148 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.855135442 +0000 UTC m=+133.599083783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.383322 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.391346 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.410109 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.430767 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.456977 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.457326 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.957301606 +0000 UTC m=+133.701249937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.457762 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.457977 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.458483 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.958465674 +0000 UTC m=+133.702414015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.470542 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.491461 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.511142 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.531340 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.551921 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.561069 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.561306 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.061281823 +0000 UTC m=+133.805230164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.561715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.562450 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.062433712 +0000 UTC m=+133.806382053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.570731 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.590118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.611635 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.631245 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.651005 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.663778 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.664813 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.16479702 +0000 UTC m=+133.908745361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.16479702 +0000 UTC m=+133.908745361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.671276 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.690985 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.710643 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.734469 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.751321 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753647 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"dd8eace28cff86a1b5496de821e5744b107cd43f9a01079db5e4df31ce5d6895"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753681 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"d55d4ebbbbf7c389d9c0dd05f0fb2c775150191738bcc3210391f85f462ace3f"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753691 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"7e2348274672d48c92c39196e7e9a5af45bc6c0506c6cf5cb0e605cb31232ff2"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755570 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" event={"ID":"ae478ef7-56ef-496c-b99c-4d952d5617b0","Type":"ContainerStarted","Data":"c473ccb128a241b291a7ddb1089097c227250278ca512ecd15cd4815e9a53b01"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755692 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" event={"ID":"ae478ef7-56ef-496c-b99c-4d952d5617b0","Type":"ContainerStarted","Data":"1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755994 5120 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" event={"ID":"ba096274-efe0-462b-9a53-89e321166944","Type":"ContainerStarted","Data":"dba09e04c0563f201b249c43f74da69960d96e49567ac521c4bb56d4526fe03e"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757170 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" event={"ID":"ba096274-efe0-462b-9a53-89e321166944","Type":"ContainerStarted","Data":"fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757547 5120 patch_prober.go:28] interesting pod/console-operator-67c89758df-6q5kp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757608 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" podUID="ae478ef7-56ef-496c-b99c-4d952d5617b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.758519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerStarted","Data":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.758555 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerStarted","Data":"b06d71ff154da6cdba043abe6374515e955691a895c872e8885cdaf9984417d0"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.759232 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.762835 5120 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-xw8v9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.762969 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.763762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" 
event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"ca04f4a8424c009f0b5737addb245fb47c68c1783cf20d8cb4bda69cdfb35adf"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.763902 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"149cc2255fd754dd34cb173207f138a4474b1c8f1b9e6893fdd2d69e3a0ba5c1"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.764030 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"e629eb6fff86a7ada5fe848ea1e2de6ee63c79dee6b4bccd40b363aa7c4e4435"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766045 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766284 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"de7628be39dffdcd6efbafc8c4d9386bd98645efcf19aa6bd627b796e8b44088"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766411 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"83bf2b2c087874c8b93a7989b3e650319643ff762a5db8cac16f527553206986"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766492 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"9501ed0b60273f6fdf8c1d12900a468e69546af222ea38f8c171b08ca38279f5"} Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.766358 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.266346269 +0000 UTC m=+134.010294610 (durationBeforeRetry 500ms). 
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768303 5120 generic.go:358] "Generic (PLEG): container finished" podID="c07a3946-e1f2-458f-bc29-15741de2605c" containerID="82d3cdaaa62f04c2d1c1cbddb8cc1cd9d718790e43e9eaf4f4bd31f2260a467a" exitCode=0
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768448 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerDied","Data":"82d3cdaaa62f04c2d1c1cbddb8cc1cd9d718790e43e9eaf4f4bd31f2260a467a"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerStarted","Data":"b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770261 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerStarted","Data":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerStarted","Data":"2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770536 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770989 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.772698 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" event={"ID":"0b427d7e-8e8a-4486-831a-aa6cc98f1b39","Type":"ContainerStarted","Data":"21082b3c6a22b6745a3993ab12e4b693bebdb61f07bc987c940b0fa236b6c615"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.772740 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" event={"ID":"0b427d7e-8e8a-4486-831a-aa6cc98f1b39","Type":"ContainerStarted","Data":"f50383757d994159e1aa2817319aba1bd5941fa2f72330ed06fb0eb17d2d34a0"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774714 5120 generic.go:358] "Generic (PLEG): container finished" podID="fd113660-b734-4d86-be8d-b28c5e9a328f" containerID="d911ab14e3f566f46e176f13f90317966dca1a8709ed763ed6fe76b67c93e320" exitCode=0
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774764 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerDied","Data":"d911ab14e3f566f46e176f13f90317966dca1a8709ed763ed6fe76b67c93e320"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774782 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b"}
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.790386 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.811081 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.834537 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.851795 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.867947 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.868106 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.368080251 +0000 UTC m=+134.112028592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.868939 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872106 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872222 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872418 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872468 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp"
\"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872575 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872725 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872767 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873161 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873235 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873328 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873480 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873526 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873574 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873686 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.874628 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.882134 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.884586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.885401 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.888057 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.889316 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.890607 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.891260 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.891361 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.892844 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.894467 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.895354 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.39533884 +0000 UTC m=+134.139287181 (durationBeforeRetry 500ms). 
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.896364 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.901611 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.901711 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.903825 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.905710 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.911291 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.917997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.941631 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.941711 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7"
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.951255 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.970123 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.975978 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.976424 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.476406566 +0000 UTC m=+134.220354907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:58.999575 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.011613 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.033172 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.069040 5120 request.go:752] "Waited before sending request" delay="1.968414834s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.078164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.078586 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.57857469 +0000 UTC m=+134.322523031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.098798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.113560 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.128326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.130021 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.133189 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.142183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.145559 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.150388 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.172892 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.179765 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.180355 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.680338933 +0000 UTC m=+134.424287274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.192202 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.192904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.213412 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.231859 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.249808 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.250660 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.270750 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.275539 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName:
\"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.282743 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.285520 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.78549927 +0000 UTC m=+134.529447611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.290708 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.309861 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.341287 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.370343 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.370944 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.384683 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.386867 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.388037 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.888016243 +0000 UTC m=+134.631964584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.396373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.397228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.418466 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.437394 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.471610 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.478256 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479544 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479899 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.480136 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.489447 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.490108 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.990095663 +0000 UTC m=+134.734044004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.499552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.535386 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.553991 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.554888 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.559484 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.576398 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.582931 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.582931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.586826 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7kn\" (UniqueName: \"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.590549 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.591412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.591678 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.091651742 +0000 UTC m=+134.835600223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.592503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.592832 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.092815841 +0000 UTC m=+134.836764182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.612812 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.613070 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.615971 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.621484 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.624123 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.643108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x68s\" (UniqueName: \"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.655564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.668516 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.685692 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.695382 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.701665 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.702071 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.202053548 +0000 UTC m=+134.946001889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.729336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6g7g\" (UniqueName: \"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.742698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.770284 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.775577 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.782575 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.792671 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.800202 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.802937 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.803278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.303264712 +0000 UTC m=+135.047213043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.806192 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.812147 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.814258 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.814750 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.823467 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.832840 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.839846 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.854262 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.855551 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerStarted","Data":"b58f46ca694e1c09b1d5aa117e6c8335287b9d84fd676c34d6ac18b2a7745319"} Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.857486 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46262: no serving certificate available for the kubelet" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.871326 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.874863 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.875312 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.879816 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.882068 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"e9d915c14e1cb702ed0ee52af36016ac13bd762c7ead7a4097c2ee644b3c21d3"} Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.898306 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.903513 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.904089 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.904564 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.404544184 +0000 UTC m=+135.148492525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.933018 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.938548 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.945404 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.949254 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46264: no serving certificate available for the kubelet" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.977279 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.988473 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.009272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.014821 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.514805493 +0000 UTC m=+135.258753834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.050370 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46280: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: W0122 11:50:00.055197 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda909382a_a9be_43ea_b525_c382d3d7dac9.slice/crio-3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc WatchSource:0}: Error finding container 3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc: Status 404 returned error can't find the container with id 3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.110388 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.110983 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.610946271 +0000 UTC m=+135.354894612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.139247 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.163510 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46292: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.213510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.214023 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:00.714007186 +0000 UTC m=+135.457955527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.264238 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46306: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.315553 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.315981 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.815940234 +0000 UTC m=+135.559888565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.344206 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.363735 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46318: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.401031 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" podStartSLOduration=115.401015453 podStartE2EDuration="1m55.401015453s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.40085671 +0000 UTC m=+135.144805051" watchObservedRunningTime="2026-01-22 11:50:00.401015453 +0000 UTC m=+135.144963794" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.417995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.418333 5120 nestedpendingoperations.go:348] 
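The interleaved "http: TLS handshake error from 192.168.126.11:...: no serving certificate available for the kubelet" lines above are a separate startup symptom: the kubelet is already serving HTTPS, but its serving-certificate CSR has not yet been approved, so every incoming scrape fails its handshake. A minimal Go sketch of that mechanism, assuming (as the kubelet's certificate manager does) a GetCertificate callback that errors until a certificate is installed; the listener address, retry loop, and variable names here are illustrative, not kubelet source:

// Illustrative sketch only: TLS handshakes fail while no serving
// certificate has been issued, mirroring the log lines above.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net"
	"time"
)

func main() {
	// Stays nil until a CSR is approved and a certificate is installed;
	// in this sketch it never is, so every handshake fails.
	var serving *tls.Certificate

	ln, err := tls.Listen("tcp", "127.0.0.1:0", &tls.Config{
		GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
			if serving == nil {
				return nil, errors.New("no serving certificate available for the kubelet")
			}
			return serving, nil
		},
	})
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	// Server loop: each accepted connection aborts its handshake while the
	// certificate is missing, which is what the repeated log lines record.
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			go func(c net.Conn) {
				defer c.Close()
				if err := c.(*tls.Conn).Handshake(); err != nil {
					fmt.Println("server: TLS handshake error:", err)
				}
			}(c)
		}
	}()

	// A client (like the scraper at 192.168.126.11 above) keeps retrying.
	for i := 0; i < 3; i++ {
		_, err := tls.Dial("tcp", ln.Addr().String(), &tls.Config{InsecureSkipVerify: true})
		fmt.Println("client: dial attempt failed:", err)
		time.Sleep(100 * time.Millisecond)
	}
}

Once the CSR is approved the callback starts returning a certificate and the handshakes succeed; in this log the errors are still recurring at 11:50:00, as the entries below show.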
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.918320172 +0000 UTC m=+135.662268513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.464591 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46330: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.519751 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.520471 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.020454095 +0000 UTC m=+135.764402426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.560143 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46332: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.621261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.622474 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.122457154 +0000 UTC m=+135.866405495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.724715 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.724926 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.224909014 +0000 UTC m=+135.968857345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.725036 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.725330 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.225323705 +0000 UTC m=+135.969272046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.789551 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" podStartSLOduration=115.789533889 podStartE2EDuration="1m55.789533889s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.788601126 +0000 UTC m=+135.532549467" watchObservedRunningTime="2026-01-22 11:50:00.789533889 +0000 UTC m=+135.533482230" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.828151 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.828297 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.328276437 +0000 UTC m=+136.072224788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.828374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.828701 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.328688776 +0000 UTC m=+136.072637117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.894002 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerStarted","Data":"224c53d4c2e0d2802958ae5a4e8f3773f21300049c7b7357bf9e459ec82f1d55"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.895064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" event={"ID":"5e1bcfb8-8fae-4947-a078-c38b69596998","Type":"ContainerStarted","Data":"18f6f5bb5a596230152d3a29c830aed3a2a10fa9a9599f4fb0775380fc6ab880"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.895089 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" event={"ID":"5e1bcfb8-8fae-4947-a078-c38b69596998","Type":"ContainerStarted","Data":"63e21539b78c3caacad2be48bbce7c838a156a1b2407e79bda3f69a577565072"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.897225 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"7b5b6871c35c27b98c915aec1ce5f2c586f492a1a2f065cbc21d34248a426f49"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.898977 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lfqzp" event={"ID":"a909382a-a9be-43ea-b525-c382d3d7dac9","Type":"ContainerStarted","Data":"e3aaddb1a50b992e545ec29f73567401c0118360b105c36c258614e980dd595d"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.899028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lfqzp" event={"ID":"a909382a-a9be-43ea-b525-c382d3d7dac9","Type":"ContainerStarted","Data":"3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.900930 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" event={"ID":"f65e3321-2af5-4ab7-8765-36af9f3ecc9e","Type":"ContainerStarted","Data":"d7571b5a6c094e5317490ea9142d0e3f44894b3c88275a7c50d443f18319ed06"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.932033 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.932150 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.43212354 +0000 UTC m=+136.176071881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.932777 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.933108 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.433099494 +0000 UTC m=+136.177047835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.992107 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" podStartSLOduration=116.992089032 podStartE2EDuration="1m56.992089032s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.99076074 +0000 UTC m=+135.734709081" watchObservedRunningTime="2026-01-22 11:50:00.992089032 +0000 UTC m=+135.736037363" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.993615 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podStartSLOduration=115.993606649 podStartE2EDuration="1m55.993606649s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.96057614 +0000 UTC m=+135.704524481" watchObservedRunningTime="2026-01-22 11:50:00.993606649 +0000 UTC m=+135.737554990" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.033467 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.034990 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.534974351 +0000 UTC m=+136.278922692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.139445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.139932 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.639917241 +0000 UTC m=+136.383865582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.240798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.241325 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.741307195 +0000 UTC m=+136.485255536 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.257693 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46344: no serving certificate available for the kubelet" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.341436 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" podStartSLOduration=116.341415469 podStartE2EDuration="1m56.341415469s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.313027972 +0000 UTC m=+136.056976313" watchObservedRunningTime="2026-01-22 11:50:01.341415469 +0000 UTC m=+136.085363810" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.343605 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.344350 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.84433781 +0000 UTC m=+136.588286151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.434453 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podStartSLOduration=115.434437041 podStartE2EDuration="1m55.434437041s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.432457724 +0000 UTC m=+136.176406065" watchObservedRunningTime="2026-01-22 11:50:01.434437041 +0000 UTC m=+136.178385432" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.444578 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.445306 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.945288164 +0000 UTC m=+136.689236505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.546995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.547480 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.047464567 +0000 UTC m=+136.791412908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.604581 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" podStartSLOduration=116.60456653 podStartE2EDuration="1m56.60456653s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.60375753 +0000 UTC m=+136.347705861" watchObservedRunningTime="2026-01-22 11:50:01.60456653 +0000 UTC m=+136.348514871" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.660780 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.661216 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.161200111 +0000 UTC m=+136.905148442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.710566 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.740356 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.753920 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.754169 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" podStartSLOduration=117.754159601 podStartE2EDuration="1m57.754159601s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.744419426 +0000 UTC m=+136.488367767" watchObservedRunningTime="2026-01-22 11:50:01.754159601 +0000 UTC m=+136.498107942" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.763972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.764334 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.264317847 +0000 UTC m=+137.008266188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.797517 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" podStartSLOduration=116.79750075 podStartE2EDuration="1m56.79750075s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.784342982 +0000 UTC m=+136.528291323" watchObservedRunningTime="2026-01-22 11:50:01.79750075 +0000 UTC m=+136.541449091" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.798041 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.835550 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.843328 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:01 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:01 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:01 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.843374 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.866816 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.867325 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.36729341 +0000 UTC m=+137.111241751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.918244 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" podStartSLOduration=116.918219733 podStartE2EDuration="1m56.918219733s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.917658739 +0000 UTC m=+136.661607100" watchObservedRunningTime="2026-01-22 11:50:01.918219733 +0000 UTC m=+136.662168074" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.941643 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerStarted","Data":"743767c75fc8dbe2e21f07b80773fcf606c65fb144c9e4f33a6d600d11d2e9c8"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.955123 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lfqzp" podStartSLOduration=5.955094155 podStartE2EDuration="5.955094155s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.952878281 +0000 UTC m=+136.696826622" watchObservedRunningTime="2026-01-22 11:50:01.955094155 +0000 UTC m=+136.699042496" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.965918 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-btnnz" event={"ID":"a1372d1c-9557-4da9-b571-ea78602f491f","Type":"ContainerStarted","Data":"b4e65ce889ae38895c08c8b3c073e04a82886add3f94b20866369d763a5ff820"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.975523 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.975927 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.47591302 +0000 UTC m=+137.219861361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.980346 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" event={"ID":"f65e3321-2af5-4ab7-8765-36af9f3ecc9e","Type":"ContainerStarted","Data":"4b8f993793fd8643e52453a201a5cc1abefa2b347e4cfc0025261d8f963f557e"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.989586 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podStartSLOduration=116.98956595 podStartE2EDuration="1m56.98956595s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.98955321 +0000 UTC m=+136.733501561" watchObservedRunningTime="2026-01-22 11:50:01.98956595 +0000 UTC m=+136.733514291" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.992235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"bb52a0103c69a7acc2f01e1cf2c2aa3da57f29f3ee3ea7dad4c6521e26a391f2"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.013860 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"c86e2026e3173b8f3a00b7ae25f6d3d62691c631cdb81827a9d224816c8b0cc0"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.017890 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerStarted","Data":"b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.017925 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.031480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.049206 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" podStartSLOduration=117.049182783 podStartE2EDuration="1m57.049182783s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:02.04158865 +0000 UTC m=+136.785536991" watchObservedRunningTime="2026-01-22 11:50:02.049182783 +0000 UTC m=+136.793131124" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.059742 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"]
Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.067827 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.069730 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.075796 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podStartSLOduration=6.075770997 podStartE2EDuration="6.075770997s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:02.06514954 +0000 UTC m=+136.809097881" watchObservedRunningTime="2026-01-22 11:50:02.075770997 +0000 UTC m=+136.819719338" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.076828 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.077025 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.577005847 +0000 UTC m=+137.320954188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.077548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.085084 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.585061752 +0000 UTC m=+137.329010093 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.089469 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.139527 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.149080 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:50:02 crc kubenswrapper[5120]: W0122 11:50:02.156229 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d89f76_66b8_4ffa_a63e_13582811b819.slice/crio-41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281 WatchSource:0}: Error finding container 41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281: Status 404 returned error can't find the container with id 41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281 Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.176673 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.177843 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.178008 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.179546 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.679528548 +0000 UTC m=+137.423476879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.181011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:50:02 crc kubenswrapper[5120]: W0122 11:50:02.189325 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefec95f9_a526_41f9_bd7c_0d1bd2505eda.slice/crio-5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29 WatchSource:0}: Error finding container 5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29: Status 404 returned error can't find the container with id 5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29 Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.233573 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.285052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.285526 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.785509474 +0000 UTC m=+137.529457815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.387639 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.388078 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.888061287 +0000 UTC m=+137.632009628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.424166 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.424215 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.436061 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.469066 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.480583 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.497332 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.497716 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.498123 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.998108251 +0000 UTC m=+137.742056592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.506679 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.570714 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46356: no serving certificate available for the kubelet" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.598629 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.599125 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.099106806 +0000 UTC m=+137.843055137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.606833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.612384 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.614627 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.616140 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.639353 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.643596 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.644412 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"]
Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.650779 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.655338 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.704559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.705053 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.205037761 +0000 UTC m=+137.948986102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.772522 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.773390 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.792930 5120 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-xmvfk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]log ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]etcd ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/max-in-flight-filter ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 11:50:02 crc kubenswrapper[5120]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/project.openshift.io-projectcache ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-startinformers ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 22 11:50:02 crc kubenswrapper[5120]: livez check failed Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.793027 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" podUID="fd113660-b734-4d86-be8d-b28c5e9a328f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.805931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.806088 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.306055736 +0000 UTC m=+138.050004077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.807614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.808499 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.308480004 +0000 UTC m=+138.052428345 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.811277 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.847212 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:02 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.847305 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.916808 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.917014 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.416979641 +0000 UTC m=+138.160927992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.917442 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.919622 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.419610375 +0000 UTC m=+138.163558716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.982740 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"] Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.018628 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.019730 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.519684567 +0000 UTC m=+138.263632908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.035460 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"c2bce4c1bf92e03ad37ebd297aba1ae5b8d55a150e333ac2467aacf92a710870"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.043065 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerStarted","Data":"639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.043143 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerStarted","Data":"d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.067869 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"befaa8b061afb24db5ded6203043cc4365244227691b27affa20c097bdbf6a0d"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.069673 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"899a9550356d2757ef7a845a346e5ddf4b8ba184cc94e439cfad04ee675ac0e1"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.083532 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" event={"ID":"9af7812b-a785-44ec-a8eb-eb72b9958b01","Type":"ContainerStarted","Data":"ef0d63016b930a7d2d0bf191f98942efe2f437bf1f68a1c9c908f87a19a250f1"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.083599 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" event={"ID":"9af7812b-a785-44ec-a8eb-eb72b9958b01","Type":"ContainerStarted","Data":"3094d7e9ba17c9ab1583e83a2d32ca60d259e90dbacafccf4452636bb2978057"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092656 5120 generic.go:358] "Generic (PLEG): container finished" podID="ea345128-daaf-464a-b774-8f8cf4c34aa5" containerID="fbe03ce179d82f4a2ede6b5469bb49d324c7240b14ddaaa5a1926a324d78ddab" exitCode=0 Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092786 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerDied","Data":"fbe03ce179d82f4a2ede6b5469bb49d324c7240b14ddaaa5a1926a324d78ddab"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerStarted","Data":"94a95916ba6c20e7e265226f4475b08186c01182f390e7a7c0a101de329d67d3"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.099832 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" event={"ID":"fbaf6c98-c3db-488e-878a-d0b1b9779ea2","Type":"ContainerStarted","Data":"d10a8dfcec5daab3a9a488965088ed110978b748a0ec0c4190c53ef88864734f"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.103102 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" event={"ID":"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9","Type":"ContainerStarted","Data":"a9ea24eee9113231642066a13a4fd99a97b50d921271e0ef48a08228316952a0"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.107812 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-llz79" event={"ID":"d92ccf27-d679-4304-98b0-a6e74c7ffda2","Type":"ContainerStarted","Data":"434e986462f099155375feceb31a8b8f3026fc7d15e0c0cbc06b958683aba5e6"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.109085 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" event={"ID":"7b273aff-e733-49a9-a191-88b0380500eb","Type":"ContainerStarted","Data":"b0232b053903bec4705b39998b2cc0e0f74928cb4c78d1d52b9b5fbd6c76a99d"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.109808 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" event={"ID":"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7","Type":"ContainerStarted","Data":"febcec9f14837798d972abe684ecefc5bf07c847f5fd7c83053c1150ee8cb9b0"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.115470 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"127cb8b8804c604feb73da7c8989f3e988105a474877d2882d0b0c96d987f1bc"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.122643 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" podStartSLOduration=118.12262333 podStartE2EDuration="1m58.12262333s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.085089571 +0000 UTC m=+137.829037922" watchObservedRunningTime="2026-01-22 11:50:03.12262333 +0000 UTC m=+137.866571671" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.124222 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" podStartSLOduration=118.124213198 podStartE2EDuration="1m58.124213198s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.121582374 +0000 UTC m=+137.865530715" watchObservedRunningTime="2026-01-22 11:50:03.124213198 +0000 UTC m=+137.868161539" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.125931 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.126351 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.626336709 +0000 UTC m=+138.370285050 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.138847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8wqc7" event={"ID":"503a8f02-4faa-4c71-a07b-e5cf7e21fd01","Type":"ContainerStarted","Data":"d8528f1b4f5cb882b0d0cbffc9ef67abd5f66a102718b79f4a9a880b70a1c016"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.148173 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" event={"ID":"91b3eb8a-7090-484d-ae8f-8bbe990bce4d","Type":"ContainerStarted","Data":"526e958b4841f4433b7d50fc908effa496406b6b7a32311ca495d3654eb161eb"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.193498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"5b1a0b828474bfc01c65e742389b89ec9558f81701ba98898857a82e2cc1733f"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.203464 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"e4eebd2729568d2d066a7f64ceb7ea7e6dd372828feeab67282c37454a5292ea"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.211181 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"ad2b08c045da56dd507b1d8f148e3fbb0995b2db33dc484cbb8c09b24c0839c1"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.228470 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.229924 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.729893976 +0000 UTC m=+138.473842317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.236004 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" event={"ID":"42d89f76-66b8-4ffa-a63e-13582811b819","Type":"ContainerStarted","Data":"6a8e8302aee96bee35bf3c1544338cb73bc120649ab799b150033bc8dcb51d6e"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.236047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" event={"ID":"42d89f76-66b8-4ffa-a63e-13582811b819","Type":"ContainerStarted","Data":"41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.238650 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" event={"ID":"3cc31b0e-b225-470f-870b-f89666eae47b","Type":"ContainerStarted","Data":"d8ea079f89246bd1fbb34ab5b932eccfa08a313f2ffab823a3e21c5008b83fdc"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.252870 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"ba91a3a11694780ec39b23d1182734e9b479730e20efef67a891f8fe6bb0c2d8"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.264734 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" podStartSLOduration=118.264719219 podStartE2EDuration="1m58.264719219s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.262865444 +0000 UTC m=+138.006813785" watchObservedRunningTime="2026-01-22 11:50:03.264719219 +0000 UTC m=+138.008667550" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.297640 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-btnnz" event={"ID":"a1372d1c-9557-4da9-b571-ea78602f491f","Type":"ContainerStarted","Data":"16c718d9c5d36b12b8d36fa5390982626b48bb4bc88b5cd99d41f35d17e69f4d"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.303044 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" event={"ID":"6edfa4a4-fdb6-420f-ba3b-d984c4784817","Type":"ContainerStarted","Data":"a442d4f6a16181914e40383f9e4e35d26c50b03d0476836bc68be6228e9550ed"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.304396 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.305159 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 
10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.305210 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.314445 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7q8jr" event={"ID":"efec95f9-a526-41f9-bd7c-0d1bd2505eda","Type":"ContainerStarted","Data":"007bbe8272bdf0401f433e76998ec3268713a17ef751ea13c15be6c502ee1eeb"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.314483 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7q8jr" event={"ID":"efec95f9-a526-41f9-bd7c-0d1bd2505eda","Type":"ContainerStarted","Data":"5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.330898 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.331114 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" event={"ID":"bdf4dfdb-f473-480e-ae44-570e99cf695f","Type":"ContainerStarted","Data":"f68de55ac52c6339e204f32d0748489be6121e2e94c83fecb6bb5d3c34732042"} Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.332238 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.832221504 +0000 UTC m=+138.576170025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.343093 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"7673ac3fcebabf4424353dc66f7a11e0069424a23b7d11295e18f80b61d79380"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.346905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" event={"ID":"9a52cc8b-fb68-4b1d-b91d-576f5ff59968","Type":"ContainerStarted","Data":"28156ee8b7a7afca6d74c5992a810f7e2ffb332e667e844aa1b362e5ce4abd79"} Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.352318 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-btnnz" podStartSLOduration=118.35229856 podStartE2EDuration="1m58.35229856s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.350760433 +0000 UTC m=+138.094708774" watchObservedRunningTime="2026-01-22 11:50:03.35229856 +0000 UTC m=+138.096246901" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.357939 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.388840 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7q8jr" podStartSLOduration=118.388821854 podStartE2EDuration="1m58.388821854s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.387922442 +0000 UTC m=+138.131870803" watchObservedRunningTime="2026-01-22 11:50:03.388821854 +0000 UTC m=+138.132770195" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.432456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.432640 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.932603053 +0000 UTC m=+138.676551404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.537309 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.540391 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.040372472 +0000 UTC m=+138.784321023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.638527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.638811 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.138733203 +0000 UTC m=+138.882681554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.741632 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.741974 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.241946713 +0000 UTC m=+138.985895054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.837351 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:03 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:03 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:03 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.837677 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.847284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.847488 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.347456247 +0000 UTC m=+139.091404598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.847997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.848385 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.348371418 +0000 UTC m=+139.092319759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.949208 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.949448 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.449427035 +0000 UTC m=+139.193375366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.949548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.949874 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.449866566 +0000 UTC m=+139.193814907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.051562 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.052086 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.55206255 +0000 UTC m=+139.296010891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.154766 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.155696 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.655679868 +0000 UTC m=+139.399628209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.261527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.261878 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.761862349 +0000 UTC m=+139.505810690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.363142 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.363602 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.863581281 +0000 UTC m=+139.607529622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.464268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.464630 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.964613547 +0000 UTC m=+139.708561888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.491279 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" event={"ID":"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9","Type":"ContainerStarted","Data":"9941affb6709dc7abfb1f43681a46c60d52f6de70b929a67943cb1370c8fd373"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.506632 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-llz79" event={"ID":"d92ccf27-d679-4304-98b0-a6e74c7ffda2","Type":"ContainerStarted","Data":"8a87de682b363087100ea69063a3e51ccf7bc8d3ce129bd7b27c84641e63e998"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.531413 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" event={"ID":"7b273aff-e733-49a9-a191-88b0380500eb","Type":"ContainerStarted","Data":"5755f3f8cb76bc4e385f79b124839e866c9facfe42f0fb11fb3a801b908c03b5"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.532452 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.547098 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" event={"ID":"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7","Type":"ContainerStarted","Data":"f07625f5a57807ccf1a7ab33ca4d5e50490e44251fcb68d3675d815050d7d9c3"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.548696 5120 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-bbphb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.548789 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" podUID="7b273aff-e733-49a9-a191-88b0380500eb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.551387 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" podStartSLOduration=119.551369857 podStartE2EDuration="1m59.551369857s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.540687339 +0000 UTC m=+139.284635680" watchObservedRunningTime="2026-01-22 11:50:04.551369857 +0000 UTC m=+139.295318198" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.575524 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.576471 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.076148127 +0000 UTC m=+139.820096468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.587163 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-llz79" podStartSLOduration=118.587138213 podStartE2EDuration="1m58.587138213s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.58532707 +0000 UTC m=+139.329275411" watchObservedRunningTime="2026-01-22 11:50:04.587138213 +0000 UTC m=+139.331086554" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.622002 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"3abd778c341b350cecb59fcea2a44b380e0f62616a7511299297758fe79feb78"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.674380 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" podStartSLOduration=119.674364685 podStartE2EDuration="1m59.674364685s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.645680651 +0000 UTC m=+139.389629002" watchObservedRunningTime="2026-01-22 11:50:04.674364685 +0000 UTC m=+139.418313026" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.680862 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.682264 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.182245396 +0000 UTC m=+139.926193737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.683796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.684194 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.184185102 +0000 UTC m=+139.928133443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.739144 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"7d0e0a090a75fb980644d31721fb0f0e506a56b7e1cb9e461cfc1a2cca3af806"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.739226 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"0a0e4f7efdc416eba67887ef75bdaa29e042ecbaa58c917d79cad78328309cf1"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.750106 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8wqc7" event={"ID":"503a8f02-4faa-4c71-a07b-e5cf7e21fd01","Type":"ContainerStarted","Data":"bdc3e6b1384933af8ca269d948a934cc0a4f49e72e9293fc226fb33bee4549ae"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.781923 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" podStartSLOduration=118.781893348 podStartE2EDuration="1m58.781893348s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.675792939 +0000 UTC m=+139.419741280" watchObservedRunningTime="2026-01-22 11:50:04.781893348 +0000 UTC m=+139.525841689" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.785527 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" 
podStartSLOduration=119.785517506 podStartE2EDuration="1m59.785517506s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.769067558 +0000 UTC m=+139.513015899" watchObservedRunningTime="2026-01-22 11:50:04.785517506 +0000 UTC m=+139.529465847" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.788105 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.788335 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.288304543 +0000 UTC m=+140.032252884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.788704 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.790634 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.290617519 +0000 UTC m=+140.034565860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.817341 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" event={"ID":"91b3eb8a-7090-484d-ae8f-8bbe990bce4d","Type":"ContainerStarted","Data":"59a3a84ef4c046e7b0cf5d93ddd7acc2d57ddc93032f97d7d703a20d097a8712"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.818932 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.820134 5120 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-fscmd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.820212 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" podUID="91b3eb8a-7090-484d-ae8f-8bbe990bce4d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.866379 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:04 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:04 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:04 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.866492 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.867515 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.867795 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.869088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" podStartSLOduration=119.869078168 podStartE2EDuration="1m59.869078168s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.868351731 +0000 UTC m=+139.612300092" watchObservedRunningTime="2026-01-22 11:50:04.869078168 +0000 UTC m=+139.613026509" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.870224 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8wqc7" podStartSLOduration=8.870220166 podStartE2EDuration="8.870220166s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.795305252 +0000 UTC m=+139.539253593" watchObservedRunningTime="2026-01-22 11:50:04.870220166 +0000 UTC m=+139.614168507" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.894622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.895582 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.395553019 +0000 UTC m=+140.139501360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.895762 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.895808 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.908788 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podStartSLOduration=119.90876859 podStartE2EDuration="1m59.90876859s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.90875954 +0000 UTC m=+139.652707881" watchObservedRunningTime="2026-01-22 11:50:04.90876859 +0000 UTC m=+139.652716931" Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.957349 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"6f5cc6c8232538b29e5a9b7ed13d3006d4e09c36643a44bd6106b4e7cf50fade"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.982096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" event={"ID":"3cc31b0e-b225-470f-870b-f89666eae47b","Type":"ContainerStarted","Data":"387a1eb56b33e4478745eda33301343f11a70f6ae6cef77a020e24bc1ac16505"} Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.996432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.998006 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.497993589 +0000 UTC m=+140.241941930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.021505 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"791f59633a93d596d8eb4e587137b7720856db7cf4bfdde86d83a235a7b3ff49"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.052108 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" podStartSLOduration=120.052091209 podStartE2EDuration="2m0.052091209s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.007224633 +0000 UTC m=+139.751172974" watchObservedRunningTime="2026-01-22 11:50:05.052091209 +0000 UTC m=+139.796039550" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.053141 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" podStartSLOduration=120.053136695 podStartE2EDuration="2m0.053136695s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.051847223 +0000 UTC m=+139.795795564" watchObservedRunningTime="2026-01-22 11:50:05.053136695 +0000 UTC m=+139.797085036" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.071755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" 
event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerStarted","Data":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.073218 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.085244 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" event={"ID":"6edfa4a4-fdb6-420f-ba3b-d984c4784817","Type":"ContainerStarted","Data":"094eaef27fd0f4e410c581069ab7db755c9d3ea46b5479bbd5ac1c9b695c1271"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.085700 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.093970 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.097763 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098099 5120 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-x78dg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098167 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" podUID="6edfa4a4-fdb6-420f-ba3b-d984c4784817" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098690 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" event={"ID":"bdf4dfdb-f473-480e-ae44-570e99cf695f","Type":"ContainerStarted","Data":"0c1b1dbf4d25302aef4a3b8ca0cba857337e677baae977c3dc69f79fd0614971"} Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.100109 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.600084821 +0000 UTC m=+140.344033292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.107284 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"7342d3c268de1747605fecb029c4815f4dcb52ed39a25ee2a26379bce32b37de"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.109572 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"b9dd7cadf64ccc3dac388ceebea1e51c1f42f8ece7fecc500e74806421639d00"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112320 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" gracePeriod=30 Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112547 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"93c07ec969a9a97147849760ea410885f992ff649f728a54c90d117f74984d18"} Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112592 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.117091 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.117131 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.127660 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" podStartSLOduration=120.127640758 podStartE2EDuration="2m0.127640758s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.121444697 +0000 UTC m=+139.865393038" watchObservedRunningTime="2026-01-22 11:50:05.127640758 +0000 UTC m=+139.871589099" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.206853 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" podStartSLOduration=120.206838035 podStartE2EDuration="2m0.206838035s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.206379324 +0000 UTC m=+139.950327665" watchObservedRunningTime="2026-01-22 11:50:05.206838035 +0000 UTC m=+139.950786376" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.207779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.208222 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.708208218 +0000 UTC m=+140.452156559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.209671 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" podStartSLOduration=120.209657613 podStartE2EDuration="2m0.209657613s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.153861763 +0000 UTC m=+139.897810094" watchObservedRunningTime="2026-01-22 11:50:05.209657613 +0000 UTC m=+139.953605954" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.224740 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49972: no serving certificate available for the kubelet" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.297789 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" podStartSLOduration=120.297775296 podStartE2EDuration="2m0.297775296s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.29664653 +0000 UTC m=+140.040594871" watchObservedRunningTime="2026-01-22 11:50:05.297775296 +0000 UTC m=+140.041723637" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.311724 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 
11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.311896 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.811868247 +0000 UTC m=+140.555816588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.312054 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.313744 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.813731773 +0000 UTC m=+140.557680104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.349664 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" podStartSLOduration=120.349646762 podStartE2EDuration="2m0.349646762s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.327241419 +0000 UTC m=+140.071189770" watchObservedRunningTime="2026-01-22 11:50:05.349646762 +0000 UTC m=+140.093595103" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.351683 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" podStartSLOduration=120.351676961 podStartE2EDuration="2m0.351676961s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.349336744 +0000 UTC m=+140.093285085" watchObservedRunningTime="2026-01-22 11:50:05.351676961 +0000 UTC m=+140.095625302" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.413739 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414185 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.415331 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.915308021 +0000 UTC m=+140.659256362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.416204 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.455999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.456053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.458691 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.515062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.515331 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.015319653 +0000 UTC m=+140.759267994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.616493 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.616895 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.116872581 +0000 UTC m=+140.860820912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.712145 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.719725 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.719911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.720265 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.220250514 +0000 UTC m=+140.964199015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.728243 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.738791 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.744883 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.821119 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.821477 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.321461534 +0000 UTC m=+141.065409875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.839357 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:05 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:05 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:05 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.839425 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.923851 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.924617 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.424604811 +0000 UTC m=+141.168553152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.998608 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.030915 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.031237 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.531212732 +0000 UTC m=+141.275161073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.144790 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.145336 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.645310444 +0000 UTC m=+141.389258835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.198402 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" event={"ID":"9a52cc8b-fb68-4b1d-b91d-576f5ff59968","Type":"ContainerStarted","Data":"5db6d5c2a50a5f39d1371e72b2ed006990a86cb1cdf403f04c06514a998955eb"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.210817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"04cb570fb9c1caf8e1173096ac531c588bccaf8d9c558291659f8a8ecc1b5591"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.222705 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"4022e6efe31ee6e9b9d8d1bec5819639bca01690ec2bb41567262317dee3871b"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.257248 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.259827 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:06.759796385 +0000 UTC m=+141.503744726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.281607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"ca6b1a8ab55ad362e9597c1e1763f6036b5828d5534e485bd60767c936bf6289"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.291821 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" podStartSLOduration=121.29180342 podStartE2EDuration="2m1.29180342s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.237478236 +0000 UTC m=+140.981426577" watchObservedRunningTime="2026-01-22 11:50:06.29180342 +0000 UTC m=+141.035751761" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.315504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerStarted","Data":"ab3b328971f28941ddd22b65e7d5163afcdd956d4b27632a333bba7d1084f7d5"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.315651 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.317368 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" event={"ID":"fbaf6c98-c3db-488e-878a-d0b1b9779ea2","Type":"ContainerStarted","Data":"f3ed1e9d17f07b9ff8ff641e16c8ec8290b3f92da37e8725b5f87b1b6bea3441"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.341330 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"969ed470d9d29c96fa6df5866c8ecf203eef71170171a74d2b9244433f7cc9e8"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.360213 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.361140 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:06.861120768 +0000 UTC m=+141.605069109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.368610 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"cecffe071504d0e8652bb22e3a48afb206a542a02b08701c3ce5860661e9b90a"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.371216 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" podStartSLOduration=121.371185652 podStartE2EDuration="2m1.371185652s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.290336444 +0000 UTC m=+141.034284785" watchObservedRunningTime="2026-01-22 11:50:06.371185652 +0000 UTC m=+141.115133993" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.372893 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" podStartSLOduration=121.372885163 podStartE2EDuration="2m1.372885163s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.372257118 +0000 UTC m=+141.116205469" watchObservedRunningTime="2026-01-22 11:50:06.372885163 +0000 UTC m=+141.116833504" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.394948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"d3493a2f20fcbd2363d735c66f1e13a922ac0c1bbf899a57e020458338cc9f0f"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.395028 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.406316 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" podStartSLOduration=121.406289112 podStartE2EDuration="2m1.406289112s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.403378111 +0000 UTC m=+141.147326472" watchObservedRunningTime="2026-01-22 11:50:06.406289112 +0000 UTC m=+141.150237453" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.424042 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 
11:50:06.424118 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.426240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"89b310b1c6888b42555a029d549dad764a92a398406d68db44d007c1bac7a1d5"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.429779 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.429854 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.434622 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.435059 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.442200 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" podStartSLOduration=121.442184391 podStartE2EDuration="2m1.442184391s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.441407612 +0000 UTC m=+141.185355983" watchObservedRunningTime="2026-01-22 11:50:06.442184391 +0000 UTC m=+141.186132722" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.464385 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.470301 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.970266091 +0000 UTC m=+141.714214432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.517428 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.572807 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.573254 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.073240543 +0000 UTC m=+141.817188884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.583854 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-d4ftw" podStartSLOduration=10.58382328 podStartE2EDuration="10.58382328s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.563738434 +0000 UTC m=+141.307686775" watchObservedRunningTime="2026-01-22 11:50:06.58382328 +0000 UTC m=+141.327771621" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.675905 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.676508 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.176482283 +0000 UTC m=+141.920430624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.779371 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.779922 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.279898826 +0000 UTC m=+142.023847167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.855720 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:06 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:06 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:06 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.855822 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.881272 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.881499 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.381481955 +0000 UTC m=+142.125430296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.891889 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ldwx4"] Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.983101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.984161 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.484140211 +0000 UTC m=+142.228088552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.084577 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.084814 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.584769817 +0000 UTC m=+142.328718158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.085409 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.085929 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.585922034 +0000 UTC m=+142.329870375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.187731 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.188387 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.688362475 +0000 UTC m=+142.432310816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.290403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.290977 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.790935278 +0000 UTC m=+142.534883619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.391425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.391664 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.891621105 +0000 UTC m=+142.635569446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.392209 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.392592 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.892573148 +0000 UTC m=+142.636521489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.437050 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk"
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.443933 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"7b1e9d102e8db0363bd0252ff2d3d00b8a64dd89f5bd3ea4ae489c7f13d84514"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.443988 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"796f22c539f5b6405b5ebc58626c46e7b6b342ace4debb24f86e9df355d739a2"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.447880 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk"
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.447938 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"8ca93c0558816b104d72abd3a4b7d593f0ad30aac045d6bc43a55c7bcea24291"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.469691 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"51a967e2e9bf24bc1a6860f69d464a517ec8466b18d4a6637df0d203fec7f26e"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.469753 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"3bff3a4f31db2c3e5cbebb768b5ec3a31c9ecb75cdb704dc9641d07c2f7d724b"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.493318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.494018 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.994001413 +0000 UTC m=+142.737949754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"8a2c2cf1b1643793202feaa8cef3107f80f72418ea90c91c881bcca9bcd54a04"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505180 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"2c84408190bcdbdd8efdacba3a20b75ea91752e65973998a0c50653ee3161892"}
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505513 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.523031 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.523097 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.603363 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.605745 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.105730138 +0000 UTC m=+142.849678479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.705365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.705807 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.20578388 +0000 UTC m=+142.949732221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.723540 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q8d8"]
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.806855 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.807282 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.307264817 +0000 UTC m=+143.051213158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.837060 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 11:50:07 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld
Jan 22 11:50:07 crc kubenswrapper[5120]: [+]process-running ok
Jan 22 11:50:07 crc kubenswrapper[5120]: healthz check failed
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.837199 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.910223 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.910788 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.410763313 +0000 UTC m=+143.154711654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.011645 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
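[editorial annotation] Every MountVolume/UnmountVolume failure in the stretch above and below reduces to one condition: the kubelet cannot build a CSI client because kubevirt.io.hostpath-provisioner has not yet registered itself over this node's plugin-registration socket. A minimal Go sketch of that lookup, with hypothetical names (this is not the kubelet's actual csi_plugin.go code, only an illustration of the failing step):

    package main

    import "fmt"

    // csiDrivers stands in for the kubelet's in-memory map of CSI plugins that
    // have registered over the node's plugin-registration socket (hypothetical).
    var csiDrivers = map[string]string{} // driver name -> unix socket endpoint

    // newCsiDriverClient mirrors the failing call in the log: with no
    // registration entry there is nothing to dial, so the operation errors out.
    func newCsiDriverClient(driverName string) (string, error) {
    	endpoint, ok := csiDrivers[driverName]
    	if !ok {
    		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
    	}
    	return endpoint, nil
    }

    func main() {
    	if _, err := newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
    		fmt.Println(err) // fails until the driver pod registers itself
    	}
    }

Once the hostpath-provisioner plugin pod comes up and registers, the same queued operations succeed with no change on the affected pods' side.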
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.011997 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.511983843 +0000 UTC m=+143.255932184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.112837 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.113009 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.612984338 +0000 UTC m=+143.356932689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.113092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.113388 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.613380547 +0000 UTC m=+143.357328888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.214216 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.214632 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.714614909 +0000 UTC m=+143.458563250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240438 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240484 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fztfm"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240547 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245337 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245526 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245705 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.248444 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.263072 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p26dp"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.267379 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.281747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.314569 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tbgcq"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315316 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315384 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315402 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315682 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315858 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315892 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315971 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.316089 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.316163 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.316614 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.816597317 +0000 UTC m=+143.560545658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.346561 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.347011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"]
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.368357 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417415 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417468 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417528 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417550 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417597 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.418144 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.918125615 +0000 UTC m=+143.662073956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418375 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418734 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419484 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419840 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.472747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.472745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm"
Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.488029 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp"
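[editorial annotation] Each failed volume operation above is parked by nestedpendingoperations.go behind a retry window, which is what the recurring "No retries permitted until ... (durationBeforeRetry 500ms)" lines record; this excerpt only ever shows the initial 500ms window. A rough Go sketch of that bookkeeping under assumed names (the kubelet's real version is an exponential backoff capped at roughly 2m2s; this is an illustration, not its code):

    package main

    import (
    	"fmt"
    	"time"
    )

    const (
    	initialDurationBeforeRetry = 500 * time.Millisecond // first window, as logged here
    	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
    )

    // expBackoff tracks when a failed volume operation may be attempted again.
    type expBackoff struct {
    	lastErrorTime       time.Time
    	durationBeforeRetry time.Duration
    }

    // update records a failure and widens the retry window, doubling up to a cap.
    func (b *expBackoff) update(now time.Time) {
    	if b.durationBeforeRetry == 0 {
    		b.durationBeforeRetry = initialDurationBeforeRetry
    	} else {
    		b.durationBeforeRetry *= 2
    		if b.durationBeforeRetry > maxDurationBeforeRetry {
    			b.durationBeforeRetry = maxDurationBeforeRetry
    		}
    	}
    	b.lastErrorTime = now
    }

    // safeToRetry reports whether the current window has elapsed.
    func (b *expBackoff) safeToRetry(now time.Time) bool {
    	return now.After(b.lastErrorTime.Add(b.durationBeforeRetry))
    }

    func main() {
    	var b expBackoff
    	b.update(time.Now())
    	fmt.Println("no retries permitted until", b.lastErrorTime.Add(b.durationBeforeRetry))
    	fmt.Println("retry now?", b.safeToRetry(time.Now())) // false within the 500ms window
    }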
pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518821 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518886 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518917 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.519000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.520492 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.520933 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.020913453 +0000 UTC m=+143.764861784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.528441 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.563307 5120 generic.go:358] "Generic (PLEG): container finished" podID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerID="639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243" exitCode=0 Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.563454 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerDied","Data":"639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243"} Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.570934 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.576817 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.593766 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"5adcb8cefb95d4673c319746a905e8db0486fe17bfcd7800342363e85130ebad"} Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.595566 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.601133 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.622663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.623547 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.123531308 +0000 UTC m=+143.867479649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.724467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.726099 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.22607784 +0000 UTC m=+143.970026181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.736176 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.826376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.827036 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.327001903 +0000 UTC m=+144.070950244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.851242 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:08 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:08 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:08 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.851510 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.931035 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.931350 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.431337499 +0000 UTC m=+144.175285840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.032274 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.032643 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.532626631 +0000 UTC m=+144.276574962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.133800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.134260 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.63423849 +0000 UTC m=+144.378186831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.240918 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.241058 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.741033266 +0000 UTC m=+144.484981607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.241453 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.241851 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.741839455 +0000 UTC m=+144.485787796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.351445 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.352025 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.851999443 +0000 UTC m=+144.595947784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.372776 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.373232 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.454475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.455437 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.955405026 +0000 UTC m=+144.699353367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.556732 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.557166 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.057149848 +0000 UTC m=+144.801098189 (durationBeforeRetry 500ms). 
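[editorial annotation] The Readiness/Liveness/Startup probe failures interleaved above and below (connection refused, HTTP 500) are the kubelet's prober hitting each container's endpoint directly while the processes are still coming up. A self-contained Go sketch of an HTTP probe in the same spirit (a hypothetical helper, not prober.go itself):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHTTP performs one kubelet-style HTTP probe: any transport error
    // (e.g. connect: connection refused) or a status outside 200-399 fails.
    func probeHTTP(url string) (bool, string) {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err.Error()
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return true, "ok"
    	}
    	return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
    	ok, out := probeHTTP("http://10.217.0.12:8080/")
    	fmt.Println(ok, out) // e.g. false Get "http://10.217.0.12:8080/": ... connection refused
    }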
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.631881 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"d22a429ee45feef375aced9d7691d9985386b7a0d534582318023895a48f3b59"} Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.670986 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.673527 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.173502686 +0000 UTC m=+144.917451027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.696769 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ldwx4" podStartSLOduration=124.696733158 podStartE2EDuration="2m4.696733158s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:09.677262466 +0000 UTC m=+144.421210797" watchObservedRunningTime="2026-01-22 11:50:09.696733158 +0000 UTC m=+144.440681499" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.698769 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.712330 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.720419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.725830 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.772934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773307 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773408 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773465 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.773611 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.273582208 +0000 UTC m=+145.017530559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.815583 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.815633 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.820819 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-7q8jr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.820922 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7q8jr" podUID="efec95f9-a526-41f9-bd7c-0d1bd2505eda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.834800 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.839127 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:09 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:09 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:09 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.839205 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.846223 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf"
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875708 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.877184 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf"
Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.878294 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.378271823 +0000 UTC m=+145.122220334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.879452 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf"
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.913023 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"]
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.947200 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"]
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.948862 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf"
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.972949 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"]
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.976967 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.977106 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.477073994 +0000 UTC m=+145.221022335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.977534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.978043 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.478025077 +0000 UTC m=+145.221973418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.039092 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.081738 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.082152 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.582135627 +0000 UTC m=+145.326083968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.089739 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"]
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.102503 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.102842 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"]
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.152705 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.185230 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.685213153 +0000 UTC m=+145.429161494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.286733 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.286946 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287181 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287393 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.287615 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.787399347 +0000 UTC m=+145.531347688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287907 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288285 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288768 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.289121 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.789112848 +0000 UTC m=+145.533061189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.291220 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume" (OuterVolumeSpecName: "config-volume") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.303229 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.318434 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm" (OuterVolumeSpecName: "kube-api-access-kl2wm") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "kube-api-access-kl2wm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.319528 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.391747 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.392022 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.891951208 +0000 UTC m=+145.635899549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392433 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392462 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392474 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.412894 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49984: no serving certificate available for the kubelet"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.419038 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471341 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471973 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471991 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.472091 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles"
Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.480978 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
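Each failed volume operation above ends with "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the kubelet parks the failed per-volume operation behind a deadline, and the volume reconciler's next passes skip it until that deadline expires, which is why the same UnmountVolume/MountDevice pair reappears roughly every half second. A minimal Go sketch of that gating pattern follows; the 500ms first step is taken from the log, while the doubling on repeated failures is an assumption for illustration (this excerpt only ever shows the first step, and this is not the kubelet's actual nestedpendingoperations code):

package main

import (
	"fmt"
	"time"
)

// operationGate mimics the per-volume retry gate behind the
// "No retries permitted until ..." messages: a failed operation may not
// be retried before its deadline passes.
type operationGate struct {
	failures  int
	notBefore time.Time
}

// markFailed records a failure and pushes the next attempt out; the first
// delay is the 500ms step visible in the log, assumed to double afterwards.
func (g *operationGate) markFailed(now time.Time) time.Duration {
	delay := 500 * time.Millisecond << uint(g.failures)
	g.failures++
	g.notBefore = now.Add(delay)
	return delay
}

// isPermitted reports whether the reconciler may retry the operation yet.
func (g *operationGate) isPermitted(now time.Time) bool {
	return !now.Before(g.notBefore)
}

func main() {
	var g operationGate
	now := time.Now()
	d := g.markFailed(now)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %v)\n",
		g.notBefore.Format(time.RFC3339Nano), d)
	fmt.Println("retry permitted immediately?", g.isPermitted(now))        // false
	fmt.Println("retry permitted after delay?", g.isPermitted(now.Add(d))) // true
}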
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.485092 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.485420 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.493622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.494031 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.994015969 +0000 UTC m=+145.737964310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.495909 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.596908 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.597184 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.597286 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.597412 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.097395971 +0000 UTC m=+145.841344312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.687297 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerDied","Data":"d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.688477 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.688671 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.694641 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerStarted","Data":"b88cdc87cf3e9924bb751ee1a18fd60cd70c52d60437b53a435f731721d1f00b"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.701819 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702063 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702937 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.703246 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.203232984 +0000 UTC m=+145.947181325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734122 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734706 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerStarted","Data":"d7e449df56d4aa55bd535980c4c65253f3325cde543e24f2634b3227e292a791"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.740844 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.741009 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.741047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerStarted","Data":"408feb4598d3b1d5ae322e87417dab316fa1b75c632f7ace01cbd6d89c0b3941"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.755194 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.765928 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.766204 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.766251 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerStarted","Data":"942f286364f00775972ff57ef7ee9a1b6d83531d392b957342335e79a3c8a683"} Jan 22 
11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.774979 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"2f538daad7777bd0dc15f7e658704af0591513bbc56f30de7eaeb6e9ec113474"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.778122 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.779923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.779974 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerStarted","Data":"1b3c4ff9732c93011b494f79b9052c81bdd854fe832d0d1aff9714069c08086b"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.805139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.806644 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.306618707 +0000 UTC m=+146.050567048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.813343 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:10 crc kubenswrapper[5120]: W0122 11:50:10.840574 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a WatchSource:0}: Error finding container 78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a: Status 404 returned error can't find the container with id 78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.848831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.853813 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.890754 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.908643 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.909525 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.409507337 +0000 UTC m=+146.153455678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.941586 5120 util.go:30] "No sandbox for pod can be found. 
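The probe traffic in this section shows both failure modes a kubelet HTTP probe can report: a dial error ("connect: connection refused") for the console pod, and a non-2xx/3xx response ("HTTP probe failed with statuscode: 500") for the router, before the router's startup probe flips to "started" and its readiness to "ready" at 11:50:10.84. A minimal Go sketch of that style of check (an illustration, not the kubelet's prober, which also handles configured headers, schemes and per-probe timeouts):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs the kind of HTTP GET a kubelet startup/readiness probe runs:
// a dial failure or a status outside 200-399 counts as a probe failure.
func probe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		// The endpoints above serve HTTPS with cluster-internal certificates;
		// kubelet HTTPS probes likewise skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.25:8443: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the console probe messages above.
	if err := probe("https://10.217.0.25:8443/health"); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("Probe succeeded")
}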
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.011694 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.012084 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.51205851 +0000 UTC m=+146.256006841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.113847 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.114332 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.614312536 +0000 UTC m=+146.358260877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.213559 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.213716 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.214517 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.214921 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.7149048 +0000 UTC m=+146.458853141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.222489 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.315984 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316066 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316094 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.316490 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:11.816470059 +0000 UTC m=+146.560418400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.335550 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.362748 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.362994 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.364254 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.417714 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.418373 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.918338216 +0000 UTC m=+146.662286567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418803 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418900 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418970 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.419462 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod 
\"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.419695 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.419728 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.919716109 +0000 UTC m=+146.663664610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.444682 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.520438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.520878 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.020858017 +0000 UTC m=+146.764806358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.520951 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521003 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521649 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.522478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.550228 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.558640 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.624606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.624660 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.124640699 +0000 UTC m=+146.868589030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.725611 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.726046 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.226028825 +0000 UTC m=+146.969977166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805643 5120 generic.go:358] "Generic (PLEG): container finished" podID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerID="f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf" exitCode=0 Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805789 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805858 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerStarted","Data":"78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.819837 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152" exitCode=0 Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.820215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.828402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.833841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.834337 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.334316116 +0000 UTC m=+147.078264627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.836207 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerStarted","Data":"18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.935030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.935869 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.435827012 +0000 UTC m=+147.179775353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.941478 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.943279 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.443237653 +0000 UTC m=+147.187186004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.992923 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.027159 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.036191 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.043277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.043555 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.5435349 +0000 UTC m=+147.287483241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.046996 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.047066 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.159987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.160447 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.66042782 +0000 UTC m=+147.404376311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.262067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.262274 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.762244994 +0000 UTC m=+147.506193335 (durationBeforeRetry 500ms). 
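
Note: the ExecSync errors above are the exec readiness probe for kube-multus-additional-cni-plugins (cmd: /bin/bash -c "test -f /ready/ready") racing container shutdown; the runtime cannot register a new exec PID in a stopping container, so the probe errors with probeResult "unknown" instead of failing cleanly. A bare-bones sketch of the CRI call the prober issues, assuming a reachable CRI-O socket (socket path is an assumption for an OpenShift node; the container ID and command are copied from the log; illustrative only):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed CRI-O socket location; adjust for the node being debugged.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	resp, err := rt.ExecSync(ctx, &runtimeapi.ExecSyncRequest{
    		ContainerId: "b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f",
    		Cmd:         []string{"/bin/bash", "-c", "test -f /ready/ready"},
    		Timeout:     1, // seconds
    	})
    	if err != nil {
    		// The path taken in the log: an rpc error, not a probe failure.
    		fmt.Println("probe errored:", err)
    		return
    	}
    	fmt.Println("exit code:", resp.ExitCode) // 0 => ready
    }
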
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.262567 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.263180 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.763172417 +0000 UTC m=+147.507120758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.335860 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.364627 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.365064 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.865047374 +0000 UTC m=+147.608995705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.466000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.466442 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.966427207 +0000 UTC m=+147.710375548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.567906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.568324 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.068307464 +0000 UTC m=+147.812255805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.671677 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.672036 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.172024245 +0000 UTC m=+147.915972586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.732096 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.773887 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.774278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.27426119 +0000 UTC m=+148.018209531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.854830 5120 generic.go:358] "Generic (PLEG): container finished" podID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerID="1cd95b44bb4d0252e12b33d01daf7c5bffc97e700eedfc0c02f19f25cf6b8dca" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.855029 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerDied","Data":"1cd95b44bb4d0252e12b33d01daf7c5bffc97e700eedfc0c02f19f25cf6b8dca"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869228 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869748 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"f088b06a5bed8fcb72cf992ec4dfa09770bed17e70fa6aa78bd0452016efb6e5"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.878125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.878651 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.378635897 +0000 UTC m=+148.122584238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883790 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883885 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"ab803e6a4d6bc8f6c5535f7b6ba4ab7280d0c0d527dc407d8f992ddd6ad5d49c"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.980460 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.981105 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.481089257 +0000 UTC m=+148.225037598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.083607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.083944 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.583931646 +0000 UTC m=+148.327879987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.184677 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.184911 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.684883 +0000 UTC m=+148.428831341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.187068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.187892 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.687878253 +0000 UTC m=+148.431826594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.188536 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.289041 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.289278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.789235927 +0000 UTC m=+148.533184268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.290158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.290846 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.790833965 +0000 UTC m=+148.534782306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.398012 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.398433 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.898403659 +0000 UTC m=+148.642352000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.501062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.501445 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.001428974 +0000 UTC m=+148.745377315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.602380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.602762 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.102722746 +0000 UTC m=+148.846671087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.704293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.704778 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.204753206 +0000 UTC m=+148.948701547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.805914 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.806185 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.30613163 +0000 UTC m=+149.050079971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.807539 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.808256 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.308235511 +0000 UTC m=+149.052183852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.811484 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.816414 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.816503 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.830732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.892370 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"ff77e9e03e96e6345d93dc85455d6e2c23cacd600f28bb808b09581d7fc1076a"} Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.928256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.928401 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.428372529 +0000 UTC m=+149.172320870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929319 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929485 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929716 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.929732 5120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.429709091 +0000 UTC m=+149.173657432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031052 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.031343 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.531312211 +0000 UTC m=+149.275260552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031694 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031747 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031825 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.032018 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " 
pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.032420 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.532393587 +0000 UTC m=+149.276342098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.056351 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.133380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.133670 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.633624628 +0000 UTC m=+149.377572969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.133772 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.134307 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.634287894 +0000 UTC m=+149.378236235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.140948 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.227239 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235051 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.235240 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.735212298 +0000 UTC m=+149.479160629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235407 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"20ed0804-5c2e-4054-a7af-c90d2103aacb\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235494 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"20ed0804-5c2e-4054-a7af-c90d2103aacb\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235506 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20ed0804-5c2e-4054-a7af-c90d2103aacb" (UID: "20ed0804-5c2e-4054-a7af-c90d2103aacb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.236019 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.236138 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.236403 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.736375335 +0000 UTC m=+149.480323676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.271899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20ed0804-5c2e-4054-a7af-c90d2103aacb" (UID: "20ed0804-5c2e-4054-a7af-c90d2103aacb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.337920 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.338077 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.838047097 +0000 UTC m=+149.581995438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.338266 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.338440 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.338888 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.838879097 +0000 UTC m=+149.582827428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.439998 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.440272 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.94021386 +0000 UTC m=+149.684162201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.440918 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.441580 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.941570753 +0000 UTC m=+149.685519084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.476838 5120 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.534837 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.541872 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.542138 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:15.042114607 +0000 UTC m=+149.786062948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.644249 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.645475 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:15.145414358 +0000 UTC m=+149.889362699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.744733 5120 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T11:50:14.476895218Z","UUID":"c7e8900c-100e-4568-826d-b82a525ec5a2","Handler":null,"Name":"","Endpoint":""} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756356 5120 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756409 5120 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756850 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.761532 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
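
The sequence above resolves the race: plugin_watcher notices the registration socket under /var/lib/kubelet/plugins_registry, csi_plugin validates and registers driver kubevirt.io.hostpath-provisioner, and only then can TearDownAt and MountDevice obtain a CSI client. A sketch of the underlying pattern, assuming a simple name-to-endpoint registry (csiRegistry and its methods are illustrative names, not kubelet types):

// Illustrative sketch (not kubelet source): why the mounts above fail until
// the plugin registers. CSI calls are refused unless the driver name is in
// an in-memory registry, populated only when the plugin watcher has seen
// and validated the driver's registration socket.
package main

import (
	"fmt"
	"sync"
)

type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket
}

func (r *csiRegistry) Register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// client fails fast when the driver is absent, like newCsiDriverClient above.
func (r *csiRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount attempt:", err) // matches the errors in the log
	}
	// The plugin watcher sees the -reg.sock and the driver is registered...
	reg.Register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	ep, _ := reg.client("kubevirt.io.hostpath-provisioner")
	fmt.Println("subsequent mounts use endpoint:", ep) // ...and retries succeed
}
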
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.858799 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.862763 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.862817 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.892008 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.904677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerStarted","Data":"f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316"} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907769 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerDied","Data":"18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb"} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907826 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907839 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.080612 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.347430 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.595488 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.937785 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerStarted","Data":"da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.941799 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerStarted","Data":"30738daefd26ec1936e210196218667fac004e9fbe6021d4a2265a6c692aabac"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.962035 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.96202136 podStartE2EDuration="2.96202136s" podCreationTimestamp="2026-01-22 11:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:15.961463537 +0000 UTC m=+150.705411878" watchObservedRunningTime="2026-01-22 11:50:15.96202136 +0000 UTC m=+150.705969701" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.976684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"d0b13f4b17fa46610768ed8124e834029a023c21324df03453e3ee2901184dce"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.976765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"00c2b34379e711ae18744911c1a948b8f3eaad8ba2b87b458e2e44e2eed2a37e"} Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.007652 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" podStartSLOduration=20.007622504 podStartE2EDuration="20.007622504s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:16.003006902 +0000 UTC m=+150.746955263" watchObservedRunningTime="2026-01-22 11:50:16.007622504 +0000 UTC m=+150.751570865" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.424360 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.424916 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.527796 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.998782 5120 generic.go:358] "Generic (PLEG): container finished" podID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerID="da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197" exitCode=0 Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.998848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerDied","Data":"da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197"} Jan 22 11:50:17 crc kubenswrapper[5120]: I0122 11:50:17.511737 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.016136 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerStarted","Data":"e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c"} Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.017015 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.373171 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.373382 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.817616 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-7q8jr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.817684 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7q8jr" podUID="efec95f9-a526-41f9-bd7c-0d1bd2505eda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 22 11:50:20 crc kubenswrapper[5120]: I0122 11:50:20.689413 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49334: no serving certificate available for the kubelet" Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.020381 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.022428 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.024059 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.024143 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:26 crc kubenswrapper[5120]: I0122 11:50:26.439789 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:50:26 crc kubenswrapper[5120]: I0122 11:50:26.463827 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" podStartSLOduration=141.463799641 podStartE2EDuration="2m21.463799641s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:19.059578207 +0000 UTC m=+153.803526548" watchObservedRunningTime="2026-01-22 11:50:26.463799641 +0000 UTC m=+161.207747982" Jan 22 11:50:29 crc kubenswrapper[5120]: I0122 11:50:29.820916 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:29 crc kubenswrapper[5120]: I0122 11:50:29.826775 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.024122 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.026524 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.028399 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc 
kubenswrapper[5120]: E0122 11:50:32.028446 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:37 crc kubenswrapper[5120]: I0122 11:50:37.515272 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log" Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148431 5120 generic.go:358] "Generic (PLEG): container finished" podID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" exitCode=137 Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148632 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerDied","Data":"b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f"} Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.435075 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:40 crc kubenswrapper[5120]: I0122 11:50:40.027203 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:41 crc kubenswrapper[5120]: I0122 11:50:41.196434 5120 ???:1] "http: TLS handshake error from 192.168.126.11:48732: no serving certificate available for the kubelet" Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.019404 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.019976 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.020287 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.020519 5120 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.010468 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.101971 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.102180 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.102189 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1144df8b-88aa-4dd2-9b2c-ba41340bed9f" (UID: "1144df8b-88aa-4dd2-9b2c-ba41340bed9f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.104106 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.110448 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1144df8b-88aa-4dd2-9b2c-ba41340bed9f" (UID: "1144df8b-88aa-4dd2-9b2c-ba41340bed9f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.158020 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.158100 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191131 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerDied","Data":"f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316"} Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191171 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191280 5120 util.go:48] "No ready sandbox for pod can be found. 
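
Note that these ExecSync failures are recorded as probeResult="unknown" rather than "failure": the runtime could not run `test -f /ready/ready` at all (first because the container was stopping, later because the process was gone), so there is no exit code to judge. A sketch of that three-way outcome; evalExecProbe is an illustrative name, not the prober's API:

// Illustrative sketch (not kubelet source): mapping an exec probe outcome
// to a probe result. The entries above show the third case.
package main

import "fmt"

type probeResult string

const (
	success probeResult = "success"
	failure probeResult = "failure"
	unknown probeResult = "unknown"
)

// evalExecProbe: exitCode is meaningful only when execErr is nil.
func evalExecProbe(exitCode int, execErr error) probeResult {
	if execErr != nil {
		return unknown // e.g. "cannot register an exec PID: container is stopping"
	}
	if exitCode == 0 {
		return success // `test -f /ready/ready` found the readiness file
	}
	return failure // command ran but the file is missing
}

func main() {
	fmt.Println(evalExecProbe(0, nil))
	fmt.Println(evalExecProbe(1, nil))
	fmt.Println(evalExecProbe(-1, fmt.Errorf("container is stopping")))
}
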
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.195926 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerDied","Data":"224c53d4c2e0d2802958ae5a4e8f3773f21300049c7b7357bf9e459ec82f1d55"} Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196107 5120 scope.go:117] "RemoveContainer" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196141 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205500 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205585 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205651 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206370 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready" (OuterVolumeSpecName: "ready") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206558 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206593 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207582 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207611 5120 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207626 5120 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207639 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.209901 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5" (OuterVolumeSpecName: "kube-api-access-mdjp5") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "kube-api-access-mdjp5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.309147 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.565837 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"] Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.586806 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"] Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.206677 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.206742 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.211943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.215147 5120 generic.go:358] "Generic (PLEG): container finished" podID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerID="04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.215289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.225050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.238164 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.238264 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.241596 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.241749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" 
event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.259332 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.259484 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.263365 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.263413 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"} Jan 22 11:50:46 crc kubenswrapper[5120]: E0122 11:50:46.751029 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.274736 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerStarted","Data":"a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.277059 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerStarted","Data":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.279334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerStarted","Data":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.281731 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59" exitCode=0 Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.281794 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.284536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" 
event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerStarted","Data":"e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.288678 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" exitCode=0 Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.288881 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.297113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerStarted","Data":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.301013 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerStarted","Data":"f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.308061 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p26dp" podStartSLOduration=4.915322265 podStartE2EDuration="39.30804356s" podCreationTimestamp="2026-01-22 11:50:08 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.742632237 +0000 UTC m=+145.486580578" lastFinishedPulling="2026-01-22 11:50:45.135353522 +0000 UTC m=+179.879301873" observedRunningTime="2026-01-22 11:50:47.304941204 +0000 UTC m=+182.048889555" watchObservedRunningTime="2026-01-22 11:50:47.30804356 +0000 UTC m=+182.051991901" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.347747 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5nvn" podStartSLOduration=4.064584476 podStartE2EDuration="37.34772834s" podCreationTimestamp="2026-01-22 11:50:10 +0000 UTC" firstStartedPulling="2026-01-22 11:50:11.807198018 +0000 UTC m=+146.551146359" lastFinishedPulling="2026-01-22 11:50:45.090341882 +0000 UTC m=+179.834290223" observedRunningTime="2026-01-22 11:50:47.329890748 +0000 UTC m=+182.073839089" watchObservedRunningTime="2026-01-22 11:50:47.34772834 +0000 UTC m=+182.091676681" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.349489 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fztfm" podStartSLOduration=5.980882182 podStartE2EDuration="40.349482532s" podCreationTimestamp="2026-01-22 11:50:07 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.767167382 +0000 UTC m=+145.511115723" lastFinishedPulling="2026-01-22 11:50:45.135767732 +0000 UTC m=+179.879716073" observedRunningTime="2026-01-22 11:50:47.348109179 +0000 UTC m=+182.092057540" watchObservedRunningTime="2026-01-22 11:50:47.349482532 +0000 UTC m=+182.093430873" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.415177 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q8d8" podStartSLOduration=6.037479351 podStartE2EDuration="40.415152912s" podCreationTimestamp="2026-01-22 
11:50:07 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.779753286 +0000 UTC m=+145.523701627" lastFinishedPulling="2026-01-22 11:50:45.157426857 +0000 UTC m=+179.901375188" observedRunningTime="2026-01-22 11:50:47.413519463 +0000 UTC m=+182.157467804" watchObservedRunningTime="2026-01-22 11:50:47.415152912 +0000 UTC m=+182.159101243" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.500866 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp8qf" podStartSLOduration=5.193443166 podStartE2EDuration="38.500836726s" podCreationTimestamp="2026-01-22 11:50:09 +0000 UTC" firstStartedPulling="2026-01-22 11:50:11.82168501 +0000 UTC m=+146.565633351" lastFinishedPulling="2026-01-22 11:50:45.12907857 +0000 UTC m=+179.873026911" observedRunningTime="2026-01-22 11:50:47.467716765 +0000 UTC m=+182.211665106" watchObservedRunningTime="2026-01-22 11:50:47.500836726 +0000 UTC m=+182.244785067" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.501909 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tbgcq" podStartSLOduration=5.115966352 podStartE2EDuration="39.501900182s" podCreationTimestamp="2026-01-22 11:50:08 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.738000306 +0000 UTC m=+145.481948647" lastFinishedPulling="2026-01-22 11:50:45.123934136 +0000 UTC m=+179.867882477" observedRunningTime="2026-01-22 11:50:47.498451539 +0000 UTC m=+182.242399890" watchObservedRunningTime="2026-01-22 11:50:47.501900182 +0000 UTC m=+182.245848523" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.562626 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563786 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563897 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563989 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564043 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564096 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564154 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564326 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564400 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564457 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.568308 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.576398 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.578394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.579478 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" path="/var/lib/kubelet/pods/48ce43ae-5f5f-4ae6-91bd-98390a12c650/volumes" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.580257 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.670872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.671378 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772588 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772668 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.798197 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.882771 5120 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.272794 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:48 crc kubenswrapper[5120]: W0122 11:50:48.283553 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod017df5fc_18b4_45b8_af70_249c5434d3dd.slice/crio-079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54 WatchSource:0}: Error finding container 079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54: Status 404 returned error can't find the container with id 079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54 Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.314257 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.319507 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.323810 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerStarted","Data":"079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.368705 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t67f7" podStartSLOduration=6.034302011 podStartE2EDuration="38.368679945s" podCreationTimestamp="2026-01-22 11:50:10 +0000 UTC" firstStartedPulling="2026-01-22 11:50:12.885481693 +0000 UTC m=+147.629430034" lastFinishedPulling="2026-01-22 11:50:45.219859637 +0000 UTC m=+179.963807968" observedRunningTime="2026-01-22 11:50:48.368309137 +0000 UTC m=+183.112257488" watchObservedRunningTime="2026-01-22 11:50:48.368679945 +0000 UTC m=+183.112628286" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.368909 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbm7w" podStartSLOduration=5.101359673 podStartE2EDuration="37.368905161s" podCreationTimestamp="2026-01-22 11:50:11 +0000 UTC" firstStartedPulling="2026-01-22 11:50:12.871510274 +0000 UTC m=+147.615458615" lastFinishedPulling="2026-01-22 11:50:45.139055762 +0000 UTC m=+179.883004103" observedRunningTime="2026-01-22 11:50:48.339433177 +0000 UTC m=+183.083381538" watchObservedRunningTime="2026-01-22 11:50:48.368905161 +0000 UTC m=+183.112853492" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.577878 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.577945 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.596357 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.596432 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.602356 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.602449 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.702777 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.714790 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.737119 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.737162 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.334993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerStarted","Data":"50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d"} Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.357051 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.357015451 podStartE2EDuration="2.357015451s" podCreationTimestamp="2026-01-22 11:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:49.352115273 +0000 UTC m=+184.096063604" watchObservedRunningTime="2026-01-22 11:50:49.357015451 +0000 UTC m=+184.100963792" Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.714187 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2q8d8" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:49 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:49 crc kubenswrapper[5120]: > Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.779747 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tbgcq" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:49 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:49 crc kubenswrapper[5120]: > Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.041027 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.041099 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc 
kubenswrapper[5120]: I0122 11:50:50.101157 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.341627 5120 generic.go:358] "Generic (PLEG): container finished" podID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerID="50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d" exitCode=0 Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.341722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerDied","Data":"50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d"} Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.420731 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.420786 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.490894 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.394589 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.395050 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.551913 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.552387 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.642067 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731710 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"017df5fc-18b4-45b8-af70-249c5434d3dd\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731773 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"017df5fc-18b4-45b8-af70-249c5434d3dd\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731976 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "017df5fc-18b4-45b8-af70-249c5434d3dd" (UID: "017df5fc-18b4-45b8-af70-249c5434d3dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.732199 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.750704 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "017df5fc-18b4-45b8-af70-249c5434d3dd" (UID: "017df5fc-18b4-45b8-af70-249c5434d3dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.829396 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.829440 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.833371 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.355832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.355875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerDied","Data":"079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54"} Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.356167 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.592895 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t67f7" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:52 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:52 crc kubenswrapper[5120]: > Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.645737 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.869847 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbm7w" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:52 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:52 crc kubenswrapper[5120]: > Jan 22 11:50:53 crc kubenswrapper[5120]: I0122 11:50:53.360554 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5nvn" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server" containerID="cri-o://e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0" gracePeriod=2 Jan 22 11:50:54 crc 
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.368486 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0"}
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.562520 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.563597 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.563627 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.564043 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.576595 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.576771 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.579134 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.579903 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.674799 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.674902 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.675007 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776713 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
\"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776815 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776943 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.796234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.824674 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.911575 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.979795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.980048 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.980088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.981225 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities" (OuterVolumeSpecName: "utilities") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.985456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj" (OuterVolumeSpecName: "kube-api-access-2ctgj") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "kube-api-access-2ctgj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.995930 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081274 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081319 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081332 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.128880 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.375078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerStarted","Data":"c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377"} Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378439 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a"} Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378529 5120 scope.go:117] "RemoveContainer" containerID="e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378635 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.398296 5120 scope.go:117] "RemoveContainer" containerID="04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.412166 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.417507 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.437979 5120 scope.go:117] "RemoveContainer" containerID="f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.578558 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" path="/var/lib/kubelet/pods/5a52d1c0-c55c-47b4-936e-a783304a0e89/volumes" Jan 22 11:50:56 crc kubenswrapper[5120]: I0122 11:50:56.386378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerStarted","Data":"d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255"} Jan 22 11:50:56 crc kubenswrapper[5120]: I0122 11:50:56.418463 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.418432547 podStartE2EDuration="2.418432547s" podCreationTimestamp="2026-01-22 11:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:56.412737349 +0000 UTC m=+191.156685690" watchObservedRunningTime="2026-01-22 11:50:56.418432547 +0000 UTC m=+191.162380888" Jan 22 11:50:56 crc kubenswrapper[5120]: E0122 11:50:56.903341 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.633005 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.683670 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.837384 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.877319 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:59 crc kubenswrapper[5120]: I0122 11:50:59.842917 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.381402 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.382513 5120 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.416484 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tbgcq" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server" containerID="cri-o://f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" gracePeriod=2 Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.595523 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.637042 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.884450 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.938207 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:02 crc kubenswrapper[5120]: I0122 11:51:02.649512 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:02 crc kubenswrapper[5120]: I0122 11:51:02.650690 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p26dp" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server" containerID="cri-o://a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" gracePeriod=2 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.043640 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.043948 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbm7w" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server" containerID="cri-o://f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" gracePeriod=2 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.438998 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.440399 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" exitCode=137 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.440473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486"} Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.442877 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" exitCode=0 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.442913 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" 
event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.001820 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046450 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.050196 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities" (OuterVolumeSpecName: "utilities") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.063110 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp" (OuterVolumeSpecName: "kube-api-access-gp5qp") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "kube-api-access-gp5qp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.087483 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.119772 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.120585 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.149904 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150005 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150176 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150477 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150503 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150514 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.151171 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities" (OuterVolumeSpecName: "utilities") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.157234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj" (OuterVolumeSpecName: "kube-api-access-zqkzj") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "kube-api-access-zqkzj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.210349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253020 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253523 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253617 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.451589 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.455695 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.456058 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"d7e449df56d4aa55bd535980c4c65253f3325cde543e24f2634b3227e292a791"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.456340 5120 scope.go:117] "RemoveContainer" containerID="f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.461900 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.461901 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"408feb4598d3b1d5ae322e87417dab316fa1b75c632f7ace01cbd6d89c0b3941"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.472223 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" exitCode=0 Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.472396 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.475008 5120 scope.go:117] "RemoveContainer" containerID="c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.499643 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.509276 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.511681 5120 scope.go:117] "RemoveContainer" containerID="7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.512625 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.549288 5120 scope.go:117] "RemoveContainer" containerID="a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.553579 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.556014 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558662 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558779 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.559915 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities" (OuterVolumeSpecName: "utilities") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.564977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj" (OuterVolumeSpecName: "kube-api-access-lmxxj") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "kube-api-access-lmxxj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.566062 5120 scope.go:117] "RemoveContainer" containerID="9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.582128 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089fc2c1-8274-4532-a14a-21194d01a310" path="/var/lib/kubelet/pods/089fc2c1-8274-4532-a14a-21194d01a310/volumes" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.583135 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" path="/var/lib/kubelet/pods/3e95505c-a7eb-4d9f-be2f-e7129e3643b8/volumes" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.583771 5120 scope.go:117] "RemoveContainer" containerID="8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.662198 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.662224 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.665340 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.764272 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486143 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"f088b06a5bed8fcb72cf992ec4dfa09770bed17e70fa6aa78bd0452016efb6e5"} Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486617 5120 scope.go:117] "RemoveContainer" containerID="f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486202 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.522928 5120 scope.go:117] "RemoveContainer" containerID="0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.527670 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.529797 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.545377 5120 scope.go:117] "RemoveContainer" containerID="225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780" Jan 22 11:51:07 crc kubenswrapper[5120]: E0122 11:51:07.029030 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:07 crc kubenswrapper[5120]: I0122 11:51:07.595073 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" path="/var/lib/kubelet/pods/fda19cab-4c2e-47a2-993c-ce6f3795e561/volumes" Jan 22 11:51:17 crc kubenswrapper[5120]: E0122 11:51:17.141015 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:18 crc kubenswrapper[5120]: I0122 11:51:18.842027 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:51:22 crc kubenswrapper[5120]: I0122 11:51:22.183109 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35912: no serving certificate available for the kubelet" Jan 22 11:51:27 crc kubenswrapper[5120]: E0122 11:51:27.278725 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:31 crc kubenswrapper[5120]: I0122 11:51:31.973009 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:51:31 crc kubenswrapper[5120]: I0122 11:51:31.973638 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.527931 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] 
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529266 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529354 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529419 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529475 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529543 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529599 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529660 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529717 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529779 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529836 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529896 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530021 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530086 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530143 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530207 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530264 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530320 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530379 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530435 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530491 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530541 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530594 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530679 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530738 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530901 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530983 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.531050 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.531118 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.687317 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.687862 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.688884 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689106 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689149 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689195 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689215 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.706188 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707211 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707226 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707237 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707244 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707257 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707263 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707277 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707499 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707510 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707516 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707525 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707531 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707540 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707547 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707563 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707568 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707937 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707961 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707971 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707985 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707992 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707998 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708004 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708013 5120 
memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708126 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708137 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708244 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708358 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708365 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.773561 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: E0122 11:51:33.774551 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782336 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782380 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782409 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782456 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782750 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782792 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782882 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883806 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883854 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883877 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883896 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884015 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884078 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884016 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884098 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884246 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884302 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884342 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884386 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884471 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884544 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884572 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884626 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884664 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.885017 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.075336 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.102251 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.625519 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.685900 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.685994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"93b89363df3ac6ad673e5ae755b2fab3bc9dad346d982ed1e9e6e0b8559055f7"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.686401 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.687117 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.688540 5120 generic.go:358] "Generic (PLEG): container finished" podID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerID="d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.688653 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerDied","Data":"d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.689695 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.690779 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692286 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692909 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692936 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692944 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692974 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" exitCode=2 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.693023 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:51:35 crc kubenswrapper[5120]: I0122 11:51:35.574949 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:35 crc kubenswrapper[5120]: I0122 11:51:35.709488 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.038804 5120 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.040072 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118107 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118315 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118464 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118631 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118543 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock" (OuterVolumeSpecName: "var-lock") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118964 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.119050 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.126582 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.220047 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.558119 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.559087 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.559771 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.560225 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623097 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623213 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623270 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623351 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623400 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623397 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623452 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624238 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624333 5120 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624356 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624370 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.629725 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.722982 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.723657 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" exitCode=0 Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.723947 5120 scope.go:117] "RemoveContainer" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.724102 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.726081 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.726111 5120 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732182 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerDied","Data":"c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377"} Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732241 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732413 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.750413 5120 scope.go:117] "RemoveContainer" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.753168 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.753575 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.758540 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.763649 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.768257 5120 scope.go:117] "RemoveContainer" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.786950 5120 scope.go:117] "RemoveContainer" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.801901 5120 scope.go:117] "RemoveContainer" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.825588 5120 scope.go:117] "RemoveContainer" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.880987 5120 scope.go:117] "RemoveContainer" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.881506 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": container with ID starting with 79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b not found: ID does not exist" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.881555 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b"} err="failed to get container status \"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": rpc error: code = NotFound desc = could not find container 
\"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": container with ID starting with 79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.881612 5120 scope.go:117] "RemoveContainer" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882026 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": container with ID starting with fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc not found: ID does not exist" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882061 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc"} err="failed to get container status \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": rpc error: code = NotFound desc = could not find container \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": container with ID starting with fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882074 5120 scope.go:117] "RemoveContainer" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882394 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": container with ID starting with 3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f not found: ID does not exist" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882421 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f"} err="failed to get container status \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": rpc error: code = NotFound desc = could not find container \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": container with ID starting with 3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882437 5120 scope.go:117] "RemoveContainer" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882718 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": container with ID starting with 64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1 not found: ID does not exist" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882744 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1"} 
err="failed to get container status \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": rpc error: code = NotFound desc = could not find container \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": container with ID starting with 64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1 not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882759 5120 scope.go:117] "RemoveContainer" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.883133 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": container with ID starting with 911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c not found: ID does not exist" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.883244 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c"} err="failed to get container status \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": rpc error: code = NotFound desc = could not find container \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": container with ID starting with 911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.883287 5120 scope.go:117] "RemoveContainer" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.884028 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": container with ID starting with 8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41 not found: ID does not exist" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.884062 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41"} err="failed to get container status \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": rpc error: code = NotFound desc = could not find container \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": container with ID starting with 8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41 not found: ID does not exist" Jan 22 11:51:37 crc kubenswrapper[5120]: E0122 11:51:37.411998 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:37 crc kubenswrapper[5120]: I0122 11:51:37.585124 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.854939 
5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856067 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856422 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856738 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.857185 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: I0122 11:51:40.857231 5120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.857642 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Jan 22 11:51:41 crc kubenswrapper[5120]: E0122 11:51:41.059427 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Jan 22 11:51:41 crc kubenswrapper[5120]: E0122 11:51:41.460812 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Jan 22 11:51:42 crc kubenswrapper[5120]: E0122 11:51:42.262098 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Jan 22 11:51:43 crc kubenswrapper[5120]: E0122 11:51:43.863085 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Jan 22 11:51:43 crc kubenswrapper[5120]: I0122 11:51:43.896569 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" 
containerID="cri-o://1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" gracePeriod=15 Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.320390 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.321298 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.321810 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450162 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450271 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450356 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450457 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450501 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450647 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450781 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450763 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450817 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451745 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451758 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451142 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452314 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452382 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452687 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453259 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453287 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453784 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453807 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453827 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.461152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462428 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462640 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462667 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt" (OuterVolumeSpecName: "kube-api-access-dgrjt") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "kube-api-access-dgrjt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.464802 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.465169 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.465778 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.466006 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.472612 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.555776 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556253 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556345 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556415 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556485 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556589 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556680 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556750 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556810 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: E0122 11:51:44.627642 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796090 5120 generic.go:358] "Generic (PLEG): container finished" podID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" exitCode=0 Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796226 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796271 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerDied","Data":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"} Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796346 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerDied","Data":"743767c75fc8dbe2e21f07b80773fcf606c65fb144c9e4f33a6d600d11d2e9c8"} Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796401 5120 scope.go:117] "RemoveContainer" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.797513 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.798490 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.821084 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.821847 5120 status_manager.go:895] "Failed to get 
status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.837039 5120 scope.go:117] "RemoveContainer" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: E0122 11:51:44.837759 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": container with ID starting with 1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69 not found: ID does not exist" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.837851 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"} err="failed to get container status \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": rpc error: code = NotFound desc = could not find container \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": container with ID starting with 1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69 not found: ID does not exist" Jan 22 11:51:45 crc kubenswrapper[5120]: I0122 11:51:45.579544 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:45 crc kubenswrapper[5120]: I0122 11:51:45.580616 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: E0122 11:51:47.065193 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="6.4s" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819506 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819565 5120 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7" exitCode=1 Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7"} Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.820793 5120 
scope.go:117] "RemoveContainer" containerID="d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821054 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821338 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821677 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.570840 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.572197 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.572668 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.573053 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.584052 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.584098 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:48 crc kubenswrapper[5120]: E0122 11:51:48.584626 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 
11:51:48.586062 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: W0122 11:51:48.606640 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3 WatchSource:0}: Error finding container cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3: Status 404 returned error can't find the container with id cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3 Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.825581 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3"} Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.828329 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.828408 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3ef6755048cc0fe7514752d596373386336135c1ba58aff51a2e461dc885948a"} Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.829778 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.829949 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.830135 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838341 5120 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="96218f764b310f89071e3f04e8558cb34a8b29869c9c379c60ba16ecec9042cd" exitCode=0 Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838447 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"96218f764b310f89071e3f04e8558cb34a8b29869c9c379c60ba16ecec9042cd"} Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838687 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838829 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:49 crc kubenswrapper[5120]: E0122 11:51:49.839195 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.840607 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.841511 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.841951 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.847806 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6ebad79612c4c7fa4d607ff9cce803f48601be83abb186a55e5c558549c3166b"} Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.848190 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e0f19d8e763a99e493269d5d958da4f8368d1fce51a1c596d8605b4bfd7f7f57"} Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.848201 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"67488d9f7d77d47e57f5c6b52c8e91a033d7fa0e6d519d8082c5f2c87b11397f"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.739281 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.751472 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.856689 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d9a496b4bab60873fc28ca7402b37b731300f0df000a573ef929311e699429f4"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 
11:51:51.856762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"71b962cc8ada4fe61e821258d5ea7098651ad03533bca91eeacc32f2d01336fe"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857044 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857371 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857401 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.586173 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.586234 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.593500 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.872477 5120 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.873013 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.939080 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.892754 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.893266 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.892752 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.897820 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.899252 5120 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://67488d9f7d77d47e57f5c6b52c8e91a033d7fa0e6d519d8082c5f2c87b11397f" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.899275 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.896817 5120 
kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.896849 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.900754 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:52:01 crc kubenswrapper[5120]: I0122 11:52:01.972814 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:01 crc kubenswrapper[5120]: I0122 11:52:01.973359 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:02 crc kubenswrapper[5120]: I0122 11:52:02.872688 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.353894 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.450925 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.481641 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.781699 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.214647 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.243801 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.467089 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.560288 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.596675 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.712169 5120 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.805206 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.896738 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.219017 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.223301 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.584900 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.602244 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.723752 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.768126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.828065 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.868916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.088315 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.184409 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.285119 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.324697 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.357648 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.376000 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.376204 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 
11:52:10.391226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.443169 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.451238 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.524202 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.630832 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.673713 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.778669 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.813126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.870439 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.873356 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.941102 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.943502 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.982587 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.989279 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.995437 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.091551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.105343 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.126665 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc 
kubenswrapper[5120]: I0122 11:52:11.128202 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.335226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.408005 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.478132 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.514760 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.667332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.668010 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.724608 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.729836 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.739543 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.822821 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.911616 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.934896 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.077455 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.124535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.145031 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.154790 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.174850 5120 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.229455 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.285106 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.396853 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.492447 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.535167 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.559535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.613611 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.732560 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.814042 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.939619 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.091068 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.185887 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.222944 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.239876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.248724 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.270595 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.278503 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 11:52:13 crc 
kubenswrapper[5120]: I0122 11:52:13.324274 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.366983 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.496470 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.564138 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.579839 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.599765 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.608240 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.684784 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.733345 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.793831 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.819000 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.859269 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.922290 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.995685 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.006681 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.122548 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.133654 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.179758 5120 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.181132 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.269098 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.276406 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.439893 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.483627 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.541583 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.614743 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.671353 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.697059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.766511 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.801951 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.839836 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.864842 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.899029 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.916899 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.922903 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.923022 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.931364 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.950872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.955250 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.955227615 podStartE2EDuration="18.955227615s" podCreationTimestamp="2026-01-22 11:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:52:14.951169534 +0000 UTC m=+269.695117885" watchObservedRunningTime="2026-01-22 11:52:14.955227615 +0000 UTC m=+269.699176006" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.965383 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.980924 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.028633 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.137366 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.160172 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"] Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161293 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161330 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161361 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161374 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161645 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161671 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.186872 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.190091 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.190421 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.193397 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.193899 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.195047 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.197372 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.197671 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.200979 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201041 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201289 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201612 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201873 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.205260 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.211411 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.270860 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.300307 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.300769 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301225 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301390 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301543 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301901 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc 
kubenswrapper[5120]: I0122 11:52:15.302093 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302244 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302406 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302555 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302700 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302842 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.356472 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.376141 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.404579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" 
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405116 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405233 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405273 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405323 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405413 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405556 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406335 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406644 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406845 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406862 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.407660 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.408174 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415184 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415220 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.417511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.417795 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.420557 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.421676 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.427377 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.438736 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.442851 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.502381 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.564734 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.586895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.596532 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" path="/var/lib/kubelet/pods/bebd6777-9b90-4b62-a3a9-360290cb39a9/volumes" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.623643 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.634876 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.676538 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.723811 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.735079 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.839029 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.978129 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.097408 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.188176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.376699 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.414211 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.495720 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.517503 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.527772 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.564262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.572430 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.650579 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.707618 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.840236 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.872877 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.886916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.938569 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.979615 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.003487 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 
11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.046470 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.119888 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.152570 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.153770 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.251290 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.350895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.375155 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.381203 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.401468 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.459414 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.519820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.529034 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.703323 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.763355 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.838146 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.870438 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.948903 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.949435 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.106444 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.179163 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.202328 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.254716 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.285914 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.287460 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.542482 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.543761 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.602683 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.732317 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.772262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.789348 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.831841 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.832077 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.914084 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.965395 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 
11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.171364 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.407818 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.546386 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.546839 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" gracePeriod=5 Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.589305 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.743312 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.773452 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.835829 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.843550 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.853876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.861872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.952844 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.956406 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.128107 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.249711 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.253946 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.480820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.499529 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.594213 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.973747 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.994856 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.024413 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.030914 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.074743 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.079876 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.178889 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.182620 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.268436 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.487192 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.532016 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.534863 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.561768 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.571398 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.811640 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.913068 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.955555 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.971905 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.001270 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.029494 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.149093 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35886: no serving certificate available for the kubelet" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.192581 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.337633 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.384636 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.432076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.008027 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.423059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.462668 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.067882 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.068266 5120 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" exitCode=137 Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.128567 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.128738 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.130910 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.153841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.153913 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154150 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154138 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154254 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154279 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154308 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155010 5120 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155044 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155063 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155075 5120 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.167535 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.256326 5120 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.579532 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.582597 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075183 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075379 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075399 5120 scope.go:117] "RemoveContainer" containerID="a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.076894 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.082050 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.496079 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.972946 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973059 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973774 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973836 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" gracePeriod=600 Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.130572 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" exitCode=0 Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.130647 5120 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"} Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.131473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} Jan 22 11:52:36 crc kubenswrapper[5120]: I0122 11:52:36.808799 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.167798 5120 generic.go:358] "Generic (PLEG): container finished" podID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" exitCode=0 Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.167864 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"} Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.168888 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.875748 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.177141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.177536 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.180385 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:41 crc kubenswrapper[5120]: I0122 11:52:41.769340 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 11:52:43 crc kubenswrapper[5120]: I0122 11:52:43.225685 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 11:52:44 crc kubenswrapper[5120]: I0122 11:52:44.140893 5120 ???:1] "http: TLS handshake error from 192.168.126.11:43860: no serving certificate available for the kubelet" Jan 22 11:52:45 crc kubenswrapper[5120]: I0122 11:52:45.495169 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 11:52:45 crc kubenswrapper[5120]: I0122 11:52:45.759373 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:52:45 crc 
kubenswrapper[5120]: I0122 11:52:45.761381 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.070393 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.477502 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.623592 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.157134 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.180790 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.336859 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.817484 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"] Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.008364 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.026743 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"] Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.032232 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.281896 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" event={"ID":"964936ed-c6ba-45f2-9ccd-871c228a1383","Type":"ContainerStarted","Data":"fc4cee474f1ff19682c4f444f2fabd3665b45c2128dfba20159e306ed490cf50"} Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.290547 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" event={"ID":"964936ed-c6ba-45f2-9ccd-871c228a1383","Type":"ContainerStarted","Data":"f3cddea63f64dea9bbc8955882e5983e7e468173d44d256dd0e0dd293dd54ccb"} Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.291069 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.299257 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.319259 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" podStartSLOduration=97.319239968 podStartE2EDuration="1m37.319239968s" podCreationTimestamp="2026-01-22 11:51:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:52:55.314099559 +0000 UTC m=+310.058047900" watchObservedRunningTime="2026-01-22 11:52:55.319239968 +0000 UTC m=+310.063188309" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.420541 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.421047 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" containerID="cri-o://0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" gracePeriod=30 Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.439640 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.440777 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" containerID="cri-o://5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" gracePeriod=30 Jan 22 11:52:58 crc kubenswrapper[5120]: E0122 11:52:58.499165 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36a1cae_0915_45b1_abf9_2f44c78f3306.slice/crio-5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.855082 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.889731 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890349 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890366 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890388 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890396 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890502 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890515 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.896005 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.905088 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.920000 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.965296 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"]
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966169 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966194 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966304 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.971670 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.971944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972192 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972243 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972327 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") "
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972517 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972615 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972765 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972879 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config" (OuterVolumeSpecName: "config") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972886 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973060 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp" (OuterVolumeSpecName: "tmp") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973372 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973554 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973670 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973693 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973706 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.974240 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"]
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.974423 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.979709 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.979898 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr" (OuterVolumeSpecName: "kube-api-access-kzcsr") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "kube-api-access-kzcsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075178 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075473 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075575 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075610 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075658 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075710 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075740 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075778 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075798 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075887 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075912 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075984 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.076000 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 
11:52:59.076013 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077172 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077253 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp" (OuterVolumeSpecName: "tmp") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078296 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca" (OuterVolumeSpecName: "client-ca") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078404 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config" (OuterVolumeSpecName: "config") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078693 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.094799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr" (OuterVolumeSpecName: "kube-api-access-wjndr") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "kube-api-access-wjndr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.095217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.095326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.104515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177298 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177560 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177822 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177941 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178088 5120 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178157 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178235 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178323 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178405 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178342 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178562 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178776 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.181421 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.194899 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.228836 5120 util.go:30] "No sandbox for pod can be found. 
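The stretch above interleaves teardown for the outgoing pods (UIDs 007c14e3… and e36a1cae…) with setup for their replacements (2d98257c… and 70f53da5…), which is hard to follow in one stream. A hypothetical helper, not part of any kubelet tooling, that buckets the reconciler messages by pod UID using the parsed fields from the earlier sketch:

from collections import defaultdict
import re

# The volume name appears as \"name\" inside the structured messages;
# pod UIDs are ordinary RFC-4122 strings.
VOL = re.compile(r'volume \\?"(?P<vol>[\w-]+)\\?"')
UID = re.compile(r'(?P<uid>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})')

def trace(messages):
    """Group reconciler messages by pod UID -> list of (phase, volume)."""
    out = defaultdict(list)
    for msg in messages:
        vol, uid = VOL.search(msg), UID.search(msg)
        if not (vol and uid):
            continue
        if 'UnmountVolume started' in msg:
            out[uid['uid']].append(('unmount', vol['vol']))
        elif 'Volume detached' in msg:
            out[uid['uid']].append(('detached', vol['vol']))
        elif 'MountVolume.SetUp succeeded' in msg:
            out[uid['uid']].append(('mounted', vol['vol']))
    return out

Fed the messages above, this shows unmounts and detaches accumulating only under the two old UIDs while mounts accumulate only under the two new ones: the replacement pods' volumes come up as the old ones are released.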
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.291779 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.321293 5120 generic.go:358] "Generic (PLEG): container finished" podID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" exitCode=0 Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.321905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerDied","Data":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerDied","Data":"b06d71ff154da6cdba043abe6374515e955691a895c872e8885cdaf9984417d0"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322161 5120 scope.go:117] "RemoveContainer" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322382 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329117 5120 generic.go:358] "Generic (PLEG): container finished" podID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" exitCode=0 Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329270 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerDied","Data":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329302 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329343 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerDied","Data":"2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.368647 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.380493 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.381039 5120 scope.go:117] "RemoveContainer" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: E0122 11:52:59.381848 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": container with ID starting with 0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761 not found: ID does not exist" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.381913 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} err="failed to get container status \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": rpc error: code = NotFound desc = could not find container \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": container with ID starting with 0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761 not found: ID does not exist" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.382078 5120 scope.go:117] "RemoveContainer" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.385797 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.393544 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.401452 5120 scope.go:117] "RemoveContainer" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: E0122 11:52:59.403608 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": container with ID starting with 5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb not found: ID does not exist" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.403658 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"} err="failed to get 
container status \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": rpc error: code = NotFound desc = could not find container \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": container with ID starting with 5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb not found: ID does not exist" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.439351 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.539061 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:52:59 crc kubenswrapper[5120]: W0122 11:52:59.546190 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f53da5_8baf_4c45_8bb7_cf3fce499981.slice/crio-421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c WatchSource:0}: Error finding container 421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c: Status 404 returned error can't find the container with id 421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.580106 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" path="/var/lib/kubelet/pods/007c14e3-9fa4-44aa-8d05-a57c4dc222a1/volumes" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.581014 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" path="/var/lib/kubelet/pods/e36a1cae-0915-45b1-abf9-2f44c78f3306/volumes" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.701082 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.773259 5120 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-fzgnb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded" start-of-body= Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.773381 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.339839 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerStarted","Data":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.340020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerStarted","Data":"421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.340276 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.344688 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerStarted","Data":"939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.345348 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerStarted","Data":"00927548cf3ab5834622397c44d482db6c7268747d537a2305987359dc9ec861"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.347043 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.365516 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.382270 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" podStartSLOduration=2.382245675 podStartE2EDuration="2.382245675s" podCreationTimestamp="2026-01-22 11:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:00.380122793 +0000 UTC m=+315.124071134" watchObservedRunningTime="2026-01-22 11:53:00.382245675 +0000 UTC m=+315.126194056" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.415583 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" podStartSLOduration=2.415568908 podStartE2EDuration="2.415568908s" podCreationTimestamp="2026-01-22 11:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:00.412653285 +0000 UTC m=+315.156601626" watchObservedRunningTime="2026-01-22 11:53:00.415568908 +0000 UTC m=+315.159517249" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.673492 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.432613 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.433715 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" containerID="cri-o://d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" gracePeriod=30 Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.878919 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.904496 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905195 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905214 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905310 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.912196 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.919494 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958526 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958561 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958616 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958687 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.960312 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp" (OuterVolumeSpecName: "tmp") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.961008 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config" (OuterVolumeSpecName: "config") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.964574 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp" (OuterVolumeSpecName: "kube-api-access-jrdtp") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "kube-api-access-jrdtp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.965332 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.965785 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca" (OuterVolumeSpecName: "client-ca") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060466 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060539 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060637 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060778 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " 
pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060803 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061042 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061063 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061073 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061085 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061096 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162197 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162748 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc 
kubenswrapper[5120]: I0122 11:53:19.163160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.163248 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.163558 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.164069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.166866 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.180815 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.227677 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489642 5120 generic.go:358] "Generic (PLEG): container finished" podID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" exitCode=0 Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489768 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerDied","Data":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489796 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerDied","Data":"421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c"} Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489814 5120 scope.go:117] "RemoveContainer" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489987 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.513240 5120 scope.go:117] "RemoveContainer" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: E0122 11:53:19.513802 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": container with ID starting with d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a not found: ID does not exist" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.513882 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} err="failed to get container status \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": rpc error: code = NotFound desc = could not find container \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": container with ID starting with d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a not found: ID does not exist" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.531101 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.536301 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.580193 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" path="/var/lib/kubelet/pods/70f53da5-8baf-4c45-8bb7-cf3fce499981/volumes" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.654311 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.498559 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" event={"ID":"f26ed13e-d255-473f-ad8e-d3511aa1e179","Type":"ContainerStarted","Data":"7eb85acce96453925d13155b248ebd46029bb3bd270dac5b96c63174c6559fde"} Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.498603 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" event={"ID":"f26ed13e-d255-473f-ad8e-d3511aa1e179","Type":"ContainerStarted","Data":"5a872ee04ef0b173cb3e82914dad12c55dc5abe3540ea805de56604227235028"} Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.500092 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.506441 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.518526 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" podStartSLOduration=2.518507843 podStartE2EDuration="2.518507843s" podCreationTimestamp="2026-01-22 11:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:20.515046505 +0000 UTC m=+335.258994866" watchObservedRunningTime="2026-01-22 11:53:20.518507843 +0000 UTC m=+335.262456184" Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.414192 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.415019 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" containerID="cri-o://939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" gracePeriod=30 Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.750897 5120 generic.go:358] "Generic (PLEG): container finished" podID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerID="939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" exitCode=0 Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.750987 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerDied","Data":"939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.075397 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110679 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110741 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110805 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110891 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110927 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.111678 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112405 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp" (OuterVolumeSpecName: "tmp") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112500 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112530 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112651 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config" (OuterVolumeSpecName: "config") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112881 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112914 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.120322 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.120365 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l" (OuterVolumeSpecName: "kube-api-access-cgd8l") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "kube-api-access-cgd8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.122216 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.133810 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212077 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212364 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212453 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212851 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212950 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213059 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213135 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213212 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213275 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320406 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320615 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320650 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320671 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322144 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322362 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.323343 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.334911 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.343703 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.468938 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.728058 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.759141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" event={"ID":"067ebda6-cb91-41fc-8767-fc2db64a4b9d","Type":"ContainerStarted","Data":"e151023485daa0f2203dc72b463333e7a9e361094dcecb2ccb635ef072777c68"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerDied","Data":"00927548cf3ab5834622397c44d482db6c7268747d537a2305987359dc9ec861"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760797 5120 scope.go:117] "RemoveContainer" containerID="939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760850 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.798911 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.802083 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.008162 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.767380 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" event={"ID":"067ebda6-cb91-41fc-8767-fc2db64a4b9d","Type":"ContainerStarted","Data":"79cf6e9bf1240a7859af4637d9bf77fda5cc5d5ba12c513dc41da5fda2af2411"} Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.767697 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.774705 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.792840 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" podStartSLOduration=2.792796332 podStartE2EDuration="2.792796332s" podCreationTimestamp="2026-01-22 11:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:00.790673189 +0000 UTC m=+375.534621550" watchObservedRunningTime="2026-01-22 11:54:00.792796332 +0000 UTC m=+375.536744683" Jan 22 11:54:01 crc kubenswrapper[5120]: I0122 11:54:01.579665 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" path="/var/lib/kubelet/pods/2d98257c-df7b-48f7-b8c0-358847c5b9ce/volumes" Jan 22 11:54:13 crc 
kubenswrapper[5120]: I0122 11:54:13.968944 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.969929 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fztfm" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server" containerID="cri-o://bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" gracePeriod=30 Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.994372 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.995153 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q8d8" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" containerID="cri-o://36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.014449 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.014837 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" containerID="cri-o://c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.028400 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.029025 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp8qf" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server" containerID="cri-o://043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.041899 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.042377 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t67f7" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" containerID="cri-o://0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.049626 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.066124 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.066352 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153864 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153979 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.255933 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256525 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256610 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.259354 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.266149 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.285030 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.502394 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.518546 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.548583 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560644 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560725 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560797 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560903 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560974 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.563133 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities" (OuterVolumeSpecName: "utilities") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.565965 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt" (OuterVolumeSpecName: "kube-api-access-vctvt") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "kube-api-access-vctvt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.570606 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities" (OuterVolumeSpecName: "utilities") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.580159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht" (OuterVolumeSpecName: "kube-api-access-gztht") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "kube-api-access-gztht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.599496 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662276 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662314 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662324 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662333 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662342 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.683142 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.726663 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.763698 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.765319 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.771624 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866307 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866371 5120 generic.go:358] "Generic (PLEG): container finished" podID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866537 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866566 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866576 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866610 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866904 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866944 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"5b1a0b828474bfc01c65e742389b89ec9558f81701ba98898857a82e2cc1733f"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866985 5120 scope.go:117] "RemoveContainer" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.867436 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.867475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 
crc kubenswrapper[5120]: I0122 11:54:14.868648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities" (OuterVolumeSpecName: "utilities") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.872137 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h" (OuterVolumeSpecName: "kube-api-access-lsv5h") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "kube-api-access-lsv5h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.872550 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd" (OuterVolumeSpecName: "kube-api-access-kzfgd") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "kube-api-access-kzfgd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874109 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874207 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874263 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"1b3c4ff9732c93011b494f79b9052c81bdd854fe832d0d1aff9714069c08086b"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879656 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879811 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879865 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"ab803e6a4d6bc8f6c5535f7b6ba4ab7280d0c0d527dc407d8f992ddd6ad5d49c"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.882136 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities" (OuterVolumeSpecName: "utilities") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.882494 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883733 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883871 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"b88cdc87cf3e9924bb751ee1a18fd60cd70c52d60437b53a435f731721d1f00b"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883986 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889786 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"942f286364f00775972ff57ef7ee9a1b6d83531d392b957342335e79a3c8a683"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.890030 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.900210 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.918900 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.929148 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.929424 5120 scope.go:117] "RemoveContainer" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: E0122 11:54:14.930032 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": container with ID starting with c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30 not found: ID does not exist" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930085 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} err="failed to get container status \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": rpc error: code = NotFound desc = could not find container \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": container with ID starting with c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30 not found: ID does not exist" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930115 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: E0122 11:54:14.930744 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": container with ID starting with c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22 not found: ID does not exist" 
containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930800 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"} err="failed to get container status \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": rpc error: code = NotFound desc = could not find container \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": container with ID starting with c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22 not found: ID does not exist" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930864 5120 scope.go:117] "RemoveContainer" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.938296 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.953894 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.955670 5120 scope.go:117] "RemoveContainer" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.960609 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.966000 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.968771 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969072 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969179 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp" (OuterVolumeSpecName: "tmp") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969516 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969814 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969837 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969849 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969872 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969885 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969901 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.970863 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.973794 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8" (OuterVolumeSpecName: "kube-api-access-2fdm8") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "kube-api-access-2fdm8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.974446 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.983715 5120 scope.go:117] "RemoveContainer" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.000681 5120 scope.go:117] "RemoveContainer" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.001469 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": container with ID starting with 36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02 not found: ID does not exist" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.001523 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} err="failed to get container status \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": rpc error: code = NotFound desc = could not find container \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": container with ID starting with 36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.001551 5120 scope.go:117] "RemoveContainer" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.002266 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": container with ID starting with b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759 not found: ID does not exist" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.002318 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759"} err="failed to get container status \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": rpc error: code = NotFound desc = could not find container \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": container with ID starting with b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.002351 5120 scope.go:117] "RemoveContainer" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.004266 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": container with ID starting with dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3 not found: ID does not exist" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.004340 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3"} err="failed to get container status \"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": rpc error: code = NotFound desc = could not find container \"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": container with ID starting with dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.004422 5120 scope.go:117] "RemoveContainer" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.027149 5120 scope.go:117] "RemoveContainer" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.051994 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.053131 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.063481 5120 scope.go:117] "RemoveContainer" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071602 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071641 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071656 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071671 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.103166 5120 scope.go:117] "RemoveContainer" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.103934 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": container with ID starting with 0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd not found: ID does not exist" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104031 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} err="failed to get container status \"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": rpc error: code = NotFound desc = could not find container \"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": container with ID starting with 0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104072 5120 scope.go:117] "RemoveContainer" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.104596 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": container with ID starting with 6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40 not found: ID does not exist" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104701 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"} err="failed to get container status \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": rpc error: code = NotFound desc = could not find container \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": container with ID starting with 6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104794 5120 scope.go:117] "RemoveContainer" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.105135 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": container with ID starting with 985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668 not found: ID does not exist" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.105174 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668"} err="failed to get container status \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": rpc error: code = NotFound desc = could not find container \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": container with ID starting with 985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.105188 5120 scope.go:117] "RemoveContainer" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" Jan 22 11:54:15 crc 
kubenswrapper[5120]: I0122 11:54:15.123663 5120 scope.go:117] "RemoveContainer" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.180067 5120 scope.go:117] "RemoveContainer" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.218829 5120 scope.go:117] "RemoveContainer" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.219417 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": container with ID starting with 043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42 not found: ID does not exist" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219457 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} err="failed to get container status \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": rpc error: code = NotFound desc = could not find container \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": container with ID starting with 043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219486 5120 scope.go:117] "RemoveContainer" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.219764 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": container with ID starting with 78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb not found: ID does not exist" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219792 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"} err="failed to get container status \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": rpc error: code = NotFound desc = could not find container \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": container with ID starting with 78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219811 5120 scope.go:117] "RemoveContainer" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.220343 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": container with ID starting with c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152 not found: ID does not exist" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.220448 5120 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"} err="failed to get container status \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": rpc error: code = NotFound desc = could not find container \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": container with ID starting with c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.220523 5120 scope.go:117] "RemoveContainer" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.252899 5120 scope.go:117] "RemoveContainer" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.265574 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.276823 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.280411 5120 scope.go:117] "RemoveContainer" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.284487 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.290858 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.302471 5120 scope.go:117] "RemoveContainer" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.303083 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": container with ID starting with bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68 not found: ID does not exist" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303135 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} err="failed to get container status \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": rpc error: code = NotFound desc = could not find container \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": container with ID starting with bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303166 5120 scope.go:117] "RemoveContainer" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.303769 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": container with ID starting with 30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe not found: ID does not exist" 
containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303815 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"} err="failed to get container status \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": rpc error: code = NotFound desc = could not find container \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": container with ID starting with 30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303844 5120 scope.go:117] "RemoveContainer" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.304315 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": container with ID starting with 0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5 not found: ID does not exist" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.304342 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"} err="failed to get container status \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": rpc error: code = NotFound desc = could not find container \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": container with ID starting with 0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.579350 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" path="/var/lib/kubelet/pods/17d1692e-e64c-415e-98c6-fc0e5c799fe0/volumes" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.580751 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" path="/var/lib/kubelet/pods/316646c5-1898-417a-8bd7-00eeadfe1243/volumes" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.581501 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" path="/var/lib/kubelet/pods/4f669e70-10cd-47da-abc9-84be80cb5cfb/volumes" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.582781 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" path="/var/lib/kubelet/pods/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b/volumes" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.583578 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" path="/var/lib/kubelet/pods/ed489f01-1188-4d6f-9ed4-9618fddf1eab/volumes" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.790332 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791116 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-utilities" Jan 
22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791137 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791152 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791158 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791170 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791177 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791188 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791194 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791202 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791207 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791217 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791222 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791230 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791236 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791246 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791251 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791260 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791266 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 
11:54:15.791273 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791278 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-utilities" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791289 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791294 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791301 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791306 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-content" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791313 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791318 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791403 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791415 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791422 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791432 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791440 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791448 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791578 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791589 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.820045 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.820221 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.824042 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883331 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883390 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883497 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898132 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" event={"ID":"abdba773-b95f-4d73-bcb5-d36526f8e13d","Type":"ContainerStarted","Data":"fe540687eaae41d502a010521179ea9124a176308149bad985af24b6c88b8648"} Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898193 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" event={"ID":"abdba773-b95f-4d73-bcb5-d36526f8e13d","Type":"ContainerStarted","Data":"ffaa94d3418ec37b8f0d5b883651fdf2ef991cfafc402247440a3b167ae4e76b"} Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898395 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.904554 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.931267 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" podStartSLOduration=1.931235807 podStartE2EDuration="1.931235807s" podCreationTimestamp="2026-01-22 11:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:15.922534676 +0000 UTC m=+390.666483027" watchObservedRunningTime="2026-01-22 11:54:15.931235807 +0000 UTC m=+390.675184148" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " 
pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984649 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.985135 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.004198 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.141625 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.605046 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.795084 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"] Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.835630 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"] Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.835820 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.840113 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.903577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.903802 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.904087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.916940 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" exitCode=0 Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.917099 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea"} Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.917206 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerStarted","Data":"be77ef2cfeb1733dbed252c7c38f2239d4e5745805f1f6b72bcb11727aa3ba6e"} Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005283 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005930 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " 
pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.006522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.006587 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.027770 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.173525 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.610052 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"] Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.925411 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" exitCode=0 Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.925571 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996"} Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.928627 5120 generic.go:358] "Generic (PLEG): container finished" podID="65ded1b5-0551-47c3-b32f-646318c3055a" containerID="7ee90baec01e23d823fc00f77c1c09aea16cd2dea6abd1149b9f9a903c101f33" exitCode=0 Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.929869 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerDied","Data":"7ee90baec01e23d823fc00f77c1c09aea16cd2dea6abd1149b9f9a903c101f33"} Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.930093 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"75f40da878e27c27b7b3d51f7df08d6516291f4ce894aa192378c535afb294eb"} Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.194279 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"] Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.203566 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.206408 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.219436 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"] Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325592 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325672 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430786 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.431619 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.431727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod 
\"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.458243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.534176 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.641114 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"] Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.648924 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.659625 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"] Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.735632 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736116 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736151 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736276 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736398 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736497 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736618 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.788812 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838314 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838395 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838430 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838448 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.840815 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.840947 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.841046 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.841613 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.842075 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.847055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.847755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.851333 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.864608 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc 
kubenswrapper[5120]: I0122 11:54:18.869286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.936320 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerStarted","Data":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"} Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.940854 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673"} Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.962264 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pn4sg" podStartSLOduration=3.349949613 podStartE2EDuration="3.962236958s" podCreationTimestamp="2026-01-22 11:54:15 +0000 UTC" firstStartedPulling="2026-01-22 11:54:16.91868749 +0000 UTC m=+391.662635831" lastFinishedPulling="2026-01-22 11:54:17.530974835 +0000 UTC m=+392.274923176" observedRunningTime="2026-01-22 11:54:18.957365756 +0000 UTC m=+393.701314097" watchObservedRunningTime="2026-01-22 11:54:18.962236958 +0000 UTC m=+393.706185299" Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.974091 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.135926 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"] Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.203859 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jck2s"] Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.237696 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jck2s"] Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.237994 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.241176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349251 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349726 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349797 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451394 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451545 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.452735 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.452843 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.477881 
5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.501182 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"]
Jan 22 11:54:19 crc kubenswrapper[5120]: W0122 11:54:19.504530 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762bc2c2_d5b7_4508_840f_e8043b9e8729.slice/crio-fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85 WatchSource:0}: Error finding container fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85: Status 404 returned error can't find the container with id fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.597574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.952727 5120 generic.go:358] "Generic (PLEG): container finished" podID="90af06b6-8b8b-48f3-bfb2-541ef60610fa" containerID="7b4d3d345283b42169dd141b69a4f9d99e8dc1bc2646babddf8c2211a8a99a8f" exitCode=0
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.952868 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerDied","Data":"7b4d3d345283b42169dd141b69a4f9d99e8dc1bc2646babddf8c2211a8a99a8f"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.953451 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"a2030e4b672505bce8a94fc526c57daa5ff25ec625e4434e96bdddcbf471ca63"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.956667 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" event={"ID":"762bc2c2-d5b7-4508-840f-e8043b9e8729","Type":"ContainerStarted","Data":"d162068f9a4de740a5e6f36adb3441fc33cfff04a7b7ec1d8c5f15407bca9a38"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.956700 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" event={"ID":"762bc2c2-d5b7-4508-840f-e8043b9e8729","Type":"ContainerStarted","Data":"fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.957177 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.964923 5120 generic.go:358] "Generic (PLEG): container finished" podID="65ded1b5-0551-47c3-b32f-646318c3055a" containerID="deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673" exitCode=0
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.965649 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerDied","Data":"deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673"}
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.004938 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" podStartSLOduration=2.004906899 podStartE2EDuration="2.004906899s" podCreationTimestamp="2026-01-22 11:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:20.00333552 +0000 UTC m=+394.747284031" watchObservedRunningTime="2026-01-22 11:54:20.004906899 +0000 UTC m=+394.748855250"
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.060801 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jck2s"]
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.972849 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14b1ee-af9d-4a1e-863f-c69c216c25d2" containerID="649db102ccc0f8cb8cd3bd319946592ecb9fc3671a3f04f08f8b9073bff96551" exitCode=0
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.972995 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerDied","Data":"649db102ccc0f8cb8cd3bd319946592ecb9fc3671a3f04f08f8b9073bff96551"}
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.973454 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerStarted","Data":"2a0f57c0aa97cf7dcf95dd065cd65721088c61c61c21a30486701169d1432c11"}
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.978509 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53"}
Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.986722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"d0b6a0bd27b9a1fed369139925f0f56690de1df2dfc81ef1bb38d261dd735ba3"}
Jan 22 11:54:21 crc kubenswrapper[5120]: I0122 11:54:21.040712 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-srj7k" podStartSLOduration=4.262153955 podStartE2EDuration="5.040682985s" podCreationTimestamp="2026-01-22 11:54:16 +0000 UTC" firstStartedPulling="2026-01-22 11:54:17.929676253 +0000 UTC m=+392.673624594" lastFinishedPulling="2026-01-22 11:54:18.708205293 +0000 UTC m=+393.452153624" observedRunningTime="2026-01-22 11:54:21.039112416 +0000 UTC m=+395.783060787" watchObservedRunningTime="2026-01-22 11:54:21.040682985 +0000 UTC m=+395.784631346"
Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.005750 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14b1ee-af9d-4a1e-863f-c69c216c25d2" containerID="d53e2125a316325485ec382f823ee992c396b419ff0e3304341d7a5ba55c81f2" exitCode=0
Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.005852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerDied","Data":"d53e2125a316325485ec382f823ee992c396b419ff0e3304341d7a5ba55c81f2"}
Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.011240 5120 generic.go:358] "Generic (PLEG): container finished" podID="90af06b6-8b8b-48f3-bfb2-541ef60610fa" containerID="91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53" exitCode=0
Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.012075 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerDied","Data":"91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53"}
Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.017719 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerStarted","Data":"07616b4ab0fbe72a8b40083365529f358718d9f1e3bbd8c71576e020bf90a90a"}
Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.020165 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"6b4da77cc17f35988344112f756922a38f85f7da0088f93e78f0a8d17cdb8c38"}
Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.040787 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jck2s" podStartSLOduration=3.468112959 podStartE2EDuration="4.040766404s" podCreationTimestamp="2026-01-22 11:54:19 +0000 UTC" firstStartedPulling="2026-01-22 11:54:20.973805985 +0000 UTC m=+395.717754326" lastFinishedPulling="2026-01-22 11:54:21.54645943 +0000 UTC m=+396.290407771" observedRunningTime="2026-01-22 11:54:23.036941728 +0000 UTC m=+397.780890079" watchObservedRunningTime="2026-01-22 11:54:23.040766404 +0000 UTC m=+397.784714745"
Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.062049 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xvj9" podStartSLOduration=4.217636171 podStartE2EDuration="5.062029188s" podCreationTimestamp="2026-01-22 11:54:18 +0000 UTC" firstStartedPulling="2026-01-22 11:54:19.954169173 +0000 UTC m=+394.698117514" lastFinishedPulling="2026-01-22 11:54:20.79856219 +0000 UTC m=+395.542510531" observedRunningTime="2026-01-22 11:54:23.060000138 +0000 UTC m=+397.803948489" watchObservedRunningTime="2026-01-22 11:54:23.062029188 +0000 UTC m=+397.805977529"
Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.143359 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.143831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.187181 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.092133 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.173849 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-srj7k"
pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.174512 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.214257 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.088855 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.534826 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.535505 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.579041 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.089791 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.598372 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.598823 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.643424 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:30 crc kubenswrapper[5120]: I0122 11:54:30.104126 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:40 crc kubenswrapper[5120]: I0122 11:54:40.994113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:41 crc kubenswrapper[5120]: I0122 11:54:41.061789 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:01 crc kubenswrapper[5120]: I0122 11:55:01.972828 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:55:01 crc kubenswrapper[5120]: I0122 11:55:01.973556 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.107537 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" 
containerName="registry" containerID="cri-o://e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" gracePeriod=30 Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.284251 5120 generic.go:358] "Generic (PLEG): container finished" podID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerID="e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" exitCode=0 Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.284537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerDied","Data":"e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c"} Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.516818 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.622660 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623018 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623075 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623101 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623177 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623313 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623433 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.624267 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.624320 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630138 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630148 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630407 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.636066 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.642828 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w" (OuterVolumeSpecName: "kube-api-access-5mg7w") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "kube-api-access-5mg7w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.651302 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725224 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725271 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725291 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725305 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725317 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725328 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725341 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.291582 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerDied","Data":"30738daefd26ec1936e210196218667fac004e9fbe6021d4a2265a6c692aabac"} Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.291646 5120 scope.go:117] "RemoveContainer" containerID="e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.293232 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.335278 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.343088 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.582679 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" path="/var/lib/kubelet/pods/e16334d5-3fa8-48de-a8e0-af1f9fa51926/volumes" Jan 22 11:55:31 crc kubenswrapper[5120]: I0122 11:55:31.972333 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:55:31 crc kubenswrapper[5120]: I0122 11:55:31.974208 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.159080 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160231 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160245 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160376 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.182351 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.182522 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.186429 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.187655 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.188090 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.308495 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.409343 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.429872 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.499885 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.951840 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: W0122 11:56:00.960604 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda45690da_bfac_4359_88d2_e604fb44508e.slice/crio-a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977 WatchSource:0}: Error finding container a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977: Status 404 returned error can't find the container with id a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977 Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.642440 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerStarted","Data":"a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977"} Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973258 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973407 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973465 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.974197 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.974262 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" gracePeriod=600 Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651543 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" exitCode=0 Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651626 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651977 
5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.652005 5120 scope.go:117] "RemoveContainer" containerID="850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.499785 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-5v2zf" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.524293 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-5v2zf" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.674159 5120 generic.go:358] "Generic (PLEG): container finished" podID="a45690da-bfac-4359-88d2-e604fb44508e" containerID="50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa" exitCode=0 Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.674292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerDied","Data":"50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa"} Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.526730 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 11:51:04 +0000 UTC" deadline="2026-02-16 05:26:13.822479534 +0000 UTC" Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.526780 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="593h30m8.295704s" Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.906876 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.068163 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"a45690da-bfac-4359-88d2-e604fb44508e\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.074368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8" (OuterVolumeSpecName: "kube-api-access-gfwr8") pod "a45690da-bfac-4359-88d2-e604fb44508e" (UID: "a45690da-bfac-4359-88d2-e604fb44508e"). InnerVolumeSpecName "kube-api-access-gfwr8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.169728 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.527375 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 11:51:04 +0000 UTC" deadline="2026-02-18 04:33:37.611037234 +0000 UTC" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.527413 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="640h37m31.083626637s" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerDied","Data":"a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977"} Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688360 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688359 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:57:45 crc kubenswrapper[5120]: I0122 11:57:45.831229 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:57:45 crc kubenswrapper[5120]: I0122 11:57:45.834586 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.145542 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146765 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146781 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146945 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.168190 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.168233 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.171523 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.171586 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.174838 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.279894 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.381948 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.418081 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.504264 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.770344 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.775664 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:58:01 crc kubenswrapper[5120]: I0122 11:58:01.472403 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerStarted","Data":"badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3"} Jan 22 11:58:02 crc kubenswrapper[5120]: I0122 11:58:02.481114 5120 generic.go:358] "Generic (PLEG): container finished" podID="b79a0076-aa90-4841-9865-b94aef438d2e" containerID="48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa" exitCode=0 Jan 22 11:58:02 crc kubenswrapper[5120]: I0122 11:58:02.481222 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerDied","Data":"48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa"} Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.755908 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.831349 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"b79a0076-aa90-4841-9865-b94aef438d2e\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.838538 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg" (OuterVolumeSpecName: "kube-api-access-5gzsg") pod "b79a0076-aa90-4841-9865-b94aef438d2e" (UID: "b79a0076-aa90-4841-9865-b94aef438d2e"). InnerVolumeSpecName "kube-api-access-5gzsg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.933150 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") on node \"crc\" DevicePath \"\"" Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498701 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerDied","Data":"badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3"} Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498770 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3" Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498726 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:31 crc kubenswrapper[5120]: I0122 11:58:31.972372 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:58:31 crc kubenswrapper[5120]: I0122 11:58:31.973094 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.458434 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"] Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.461167 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" containerID="cri-o://b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.461258 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" containerID="cri-o://53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.682986 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"] Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684012 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" containerID="cri-o://fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684069 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" containerID="cri-o://1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684093 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684170 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" containerID="cri-o://bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684163 5120 kuberuntime_container.go:858] "Killing 
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684148 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" containerID="cri-o://a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf" gracePeriod=30
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684014 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" containerID="cri-o://e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" gracePeriod=30
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.750199 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" containerID="cri-o://29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42" gracePeriod=30
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911159 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911205 5120 generic.go:358] "Generic (PLEG): container finished" podID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" containerID="d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59" exitCode=2
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911307 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerDied","Data":"d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.912998 5120 scope.go:117] "RemoveContainer" containerID="d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59"
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913740 5120 generic.go:358] "Generic (PLEG): container finished" podID="cdb50da0-eb06-4959-b8da-70919924f77e" containerID="53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913795 5120 generic.go:358] "Generic (PLEG): container finished" podID="cdb50da0-eb06-4959-b8da-70919924f77e" containerID="b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913846 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.920443 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log"
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921017 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log"
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921521 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921550 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921559 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921572 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921580 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" exitCode=0
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921590 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" exitCode=143
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921600 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" exitCode=143
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921891 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922060 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922091 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922183 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922197 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922299 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"}
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.973215 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.973353 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.471138 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.472199 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.472831 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.475265 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.522316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527114 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527189 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.524086 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9xdkb"] Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.522431 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527213 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527271 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527305 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527312 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527423 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527462 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527495 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527526 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527616 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527667 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527713 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log" (OuterVolumeSpecName: "node-log") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527688 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527853 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527905 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527842 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527918 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527950 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527902 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527996 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528006 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528034 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528042 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528093 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528122 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528059 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528175 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528192 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kubecfg-setup" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528198 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kubecfg-setup" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528218 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528225 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528235 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528241 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528247 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528252 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528258 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528264 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528272 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528278 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528287 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528308 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528309 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528368 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528406 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket" (OuterVolumeSpecName: "log-socket") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528428 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528456 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528470 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528485 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528495 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528505 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528514 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528524 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528535 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528543 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528554 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528563 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528751 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528828 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528865 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528883 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash" (OuterVolumeSpecName: "host-slash") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528913 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528472 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529513 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529534 5120 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529547 5120 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529559 5120 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529574 5120 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529587 5120 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529601 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529612 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529625 5120 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529636 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529646 5120 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529660 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529671 5120 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529683 5120 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529697 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529710 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529722 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529735 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529745 5120 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.534783 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm" 
(OuterVolumeSpecName: "kube-api-access-zdzrm") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "kube-api-access-zdzrm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.535585 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.537071 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m" (OuterVolumeSpecName: "kube-api-access-9lt4m") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "kube-api-access-9lt4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.538294 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.547422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.579682 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"] Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.579935 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.584761 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630528 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630583 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630615 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630631 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630730 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630836 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630932 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631096 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631137 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631162 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631260 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631298 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631329 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631397 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631420 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631447 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631489 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631513 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631719 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631743 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") on node 
\"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631756 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631768 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631798 5120 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733050 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733142 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733217 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733316 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733326 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733346 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733364 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733384 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733419 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733438 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733454 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733492 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733572 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733629 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733645 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733881 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733921 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733947 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733989 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734072 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" 
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734346 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734901 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734935 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734937 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734960 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.735669 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.741723 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.742617 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.759555 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.766841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.925373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.929787 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.929949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"6d1ed07fd41158a3e43ec2ad9f9b07ddffc584f50ca4bb7898e60f5cccb1dffa"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.932629 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.932707 5120 scope.go:117] "RemoveContainer" containerID="53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.933148 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.933918 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.938771 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939425 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939826 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf" exitCode=0
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939928 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939990 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.940230 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.004040 5120 scope.go:117] "RemoveContainer" containerID="b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"
Jan 22 11:59:03 crc kubenswrapper[5120]: W0122 11:59:03.006706 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b921c3f_0298_48a5_8020_2e7932ce381a.slice/crio-7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434 WatchSource:0}: Error finding container 7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434: Status 404 returned error can't find the container with id 7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.023227 5120 scope.go:117] "RemoveContainer" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.024739 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.028531 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.057335 5120 scope.go:117] "RemoveContainer" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.071383 5120 scope.go:117] "RemoveContainer" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.083303 5120 scope.go:117] "RemoveContainer" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.097256 5120 scope.go:117] "RemoveContainer" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.110936 5120 scope.go:117] "RemoveContainer" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.119370 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.119413 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.128004 5120 scope.go:117] "RemoveContainer" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.146404 5120 scope.go:117] "RemoveContainer" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.173330 5120 scope.go:117] "RemoveContainer" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.194392 5120 scope.go:117] "RemoveContainer" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.195290 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": container with ID starting with 29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42 not found: ID does not exist" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.195352 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"} err="failed to get container status \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": rpc error: code = NotFound desc = could not find container \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": container with ID starting with 29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42 not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.195382 5120 scope.go:117] "RemoveContainer" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.196425 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": container with ID starting with 3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac not found: ID does not exist" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196465 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"} err="failed to get container status \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": rpc error: code = NotFound desc = could not find container \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": container with ID starting with 3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196485 5120 scope.go:117] "RemoveContainer" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.196830 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": container with ID starting with fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31 not found: ID does not exist" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196879 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"} err="failed to get container status \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": rpc error: code = NotFound desc = could not find container \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": container with ID starting with fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31 not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196909 5120 scope.go:117] "RemoveContainer" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.197412 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": container with ID starting with a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf not found: ID does not exist" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197468 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"} err="failed to get container status \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": rpc error: code = NotFound desc = could not find container \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": container with ID starting with a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197489 5120 scope.go:117] "RemoveContainer" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.197763 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": container with ID starting with f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860 not found: ID does not exist" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197845 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"} err="failed to get container status \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": rpc
error: code = NotFound desc = could not find container \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": container with ID starting with f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197909 5120 scope.go:117] "RemoveContainer" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198233 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": container with ID starting with e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7 not found: ID does not exist" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198268 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"} err="failed to get container status \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": rpc error: code = NotFound desc = could not find container \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": container with ID starting with e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198288 5120 scope.go:117] "RemoveContainer" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198638 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": container with ID starting with 1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f not found: ID does not exist" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198669 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"} err="failed to get container status \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": rpc error: code = NotFound desc = could not find container \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": container with ID starting with 1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198683 5120 scope.go:117] "RemoveContainer" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198907 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": container with ID starting with bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25 not found: ID does not exist" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198974 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"} err="failed to get container status \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": rpc error: code = NotFound desc = could not find container \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": container with ID starting with bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198994 5120 scope.go:117] "RemoveContainer" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.199292 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": container with ID starting with 3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356 not found: ID does not exist" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.199329 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"} err="failed to get container status \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": rpc error: code = NotFound desc = could not find container \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": container with ID starting with 3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.583848 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" path="/var/lib/kubelet/pods/cdb50da0-eb06-4959-b8da-70919924f77e/volumes" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.584486 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" path="/var/lib/kubelet/pods/dd62bdde-a6c1-42b3-9585-ba64c63cbb51/volumes" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953014 5120 generic.go:358] "Generic (PLEG): container finished" podID="f8707e23-b20a-4547-938b-1938b7cd5b7d" containerID="e0caf6d3b243b2fa89908211b540dda30bd6d0236528a194c92a37b33ff165ff" exitCode=0 Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953061 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerDied","Data":"e0caf6d3b243b2fa89908211b540dda30bd6d0236528a194c92a37b33ff165ff"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953140 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"f428aeccccecc03c7c096b1f1e17d299174a54c34bdac3db8c4a6dac0ba6fe50"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955809 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"3a6cfabe288fa7b7228174bdc16aef8fe815b2268b6878d63decf8b6cb014b56"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955898 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"36eacf406149b00c20107c824b86dcae1d9ff059fb4df9b04bef692ac0a22ec0"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955925 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.999318 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" podStartSLOduration=2.999301275 podStartE2EDuration="2.999301275s" podCreationTimestamp="2026-01-22 11:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:59:03.99702794 +0000 UTC m=+678.740976281" watchObservedRunningTime="2026-01-22 11:59:03.999301275 +0000 UTC m=+678.743249616" Jan 22 11:59:04 crc kubenswrapper[5120]: I0122 11:59:04.967216 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"5f054527e26d47e40b7a43de934cfc37cedf6605143dcc42603ff1b601db56a6"} Jan 22 11:59:04 crc kubenswrapper[5120]: I0122 11:59:04.967279 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"153387bf933f8c7599502d58d493aeb3e9ba0d9dbdf2d324a911d357c63600ad"} Jan 22 11:59:05 crc kubenswrapper[5120]: I0122 11:59:05.976253 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"6580c9e2a4b24fa001eba7200992a35a59c292453e9c13d305be3dd9994ce202"} Jan 22 11:59:05 crc kubenswrapper[5120]: I0122 11:59:05.976813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"800fad52f15b7caf05dbe96e4ad2f4bb01ae5ea793cd18575f189b6b5e954311"} Jan 22 11:59:07 crc kubenswrapper[5120]: I0122 11:59:07.006981 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"c38d96fc1b850a9d79b7cd2331227d6f44907167a0f738fb38011fc8c35f768c"} Jan 22 11:59:07 crc kubenswrapper[5120]: I0122 11:59:07.007073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"e990c7cef1b03bb5ecf36bd0de41772488021e3ffb1ea0fc469a3986800dba3e"} Jan 22 11:59:10 crc kubenswrapper[5120]: I0122 11:59:10.061601 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"d585533ddd988ab9c76820855f8f988de13240fa743a200b900978f79d19744e"} Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.097378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"97d0ab39537aca068ed5f7d070b34e9a0bb68c3a186b2abd75f6ce81d7d01f2f"} Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098201 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098246 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098273 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.134243 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.134361 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.154667 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" podStartSLOduration=11.154644104 podStartE2EDuration="11.154644104s" podCreationTimestamp="2026-01-22 11:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:59:13.151843626 +0000 UTC m=+687.895791977" watchObservedRunningTime="2026-01-22 11:59:13.154644104 +0000 UTC m=+687.898592445" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.094135 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.094926 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095036 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095897 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095973 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" gracePeriod=600 Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 
11:59:33.118105 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" exitCode=0 Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.118233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.119098 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.119132 5120 scope.go:117] "RemoveContainer" containerID="e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" Jan 22 11:59:45 crc kubenswrapper[5120]: I0122 11:59:45.149655 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.140415 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.158501 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172534 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172832 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.173571 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.175157 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.175167 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176141 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176298 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176181 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225143 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225213 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225363 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225537 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327138 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327455 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod 
\"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327569 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327672 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.328618 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.337638 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.347522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.349216 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.503208 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.513584 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.733391 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.783848 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: W0122 12:00:00.790850 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd57ca8ee_4b8e_4b45_983a_11332a457cf8.slice/crio-555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d WatchSource:0}: Error finding container 555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d: Status 404 returned error can't find the container with id 555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340223 5120 generic.go:358] "Generic (PLEG): container finished" podID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerID="73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468" exitCode=0 Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340835 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerDied","Data":"73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468"} Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340870 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerStarted","Data":"555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d"} Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.342344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerStarted","Data":"1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d"} Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.543518 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657571 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657634 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657727 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.658709 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.663985 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb" (OuterVolumeSpecName: "kube-api-access-7sjkb") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "kube-api-access-7sjkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.664607 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759226 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759265 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759277 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.359836 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.359886 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerDied","Data":"555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d"} Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.360638 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d" Jan 22 12:00:10 crc kubenswrapper[5120]: I0122 12:00:10.955449 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:10 crc kubenswrapper[5120]: I0122 12:00:10.956377 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pn4sg" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" containerID="cri-o://8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" gracePeriod=30 Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.303311 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408369 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" exitCode=0 Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408501 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408657 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"} Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408706 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"be77ef2cfeb1733dbed252c7c38f2239d4e5745805f1f6b72bcb11727aa3ba6e"} Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408729 5120 scope.go:117] "RemoveContainer" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.429089 5120 scope.go:117] "RemoveContainer" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.447903 5120 scope.go:117] "RemoveContainer" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.478852 5120 scope.go:117] "RemoveContainer" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.479502 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": container with ID starting with 
8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce not found: ID does not exist" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.479718 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"} err="failed to get container status \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": rpc error: code = NotFound desc = could not find container \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": container with ID starting with 8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.479863 5120 scope.go:117] "RemoveContainer" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.480641 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": container with ID starting with 313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996 not found: ID does not exist" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.480705 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996"} err="failed to get container status \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": rpc error: code = NotFound desc = could not find container \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": container with ID starting with 313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996 not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.480742 5120 scope.go:117] "RemoveContainer" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.481333 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": container with ID starting with 23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea not found: ID does not exist" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.481520 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea"} err="failed to get container status \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": rpc error: code = NotFound desc = could not find container \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": container with ID starting with 23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.496783 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: 
\"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.497022 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.497184 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.498523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities" (OuterVolumeSpecName: "utilities") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.503741 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb" (OuterVolumeSpecName: "kube-api-access-qfmsb") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "kube-api-access-qfmsb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.511792 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599343 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599418 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599435 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.732698 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.735864 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:13 crc kubenswrapper[5120]: I0122 12:00:13.578691 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" path="/var/lib/kubelet/pods/db99c964-abd0-4bc6-a71a-79a9c5a3c718/volumes" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.082364 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083108 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083126 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083140 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083147 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083159 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-content" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083166 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-content" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083177 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-utilities" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083185 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-utilities" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083339 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc 
kubenswrapper[5120]: I0122 12:00:15.083355 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.196732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.196921 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.199694 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251187 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251572 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.352818 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353435 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: 
\"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353912 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.372386 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.515558 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.750201 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450409 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="2b450f0d994340ebedd7d257fe63748df13451d9c058ec3625914a0aaf1d9d77" exitCode=0 Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450493 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"2b450f0d994340ebedd7d257fe63748df13451d9c058ec3625914a0aaf1d9d77"} Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerStarted","Data":"3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6"} Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.037744 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.044859 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.056764 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101207 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101400 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202927 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.203394 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.203442 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.223855 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.385911 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.482224 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="facf0e3289b882e9251e54633940bb8908cb9734e29c7069dbcf2f9c7d82dea8" exitCode=0 Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.482398 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"facf0e3289b882e9251e54633940bb8908cb9734e29c7069dbcf2f9c7d82dea8"} Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.830403 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.490470 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="99517dae7f9a7b3bdfa32446a1a6d06e3af1f8eddda207797f368f264143f4c6" exitCode=0 Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.490573 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"99517dae7f9a7b3bdfa32446a1a6d06e3af1f8eddda207797f368f264143f4c6"} Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492633 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" exitCode=0 Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492738 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e"} Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492792 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerStarted","Data":"95dee903a35163143fb71dae252bdc46fab906f21721e1c598215d1ffc26c24e"} Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.801778 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.840613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.840873 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.841021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.845243 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle" (OuterVolumeSpecName: "bundle") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.856653 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util" (OuterVolumeSpecName: "util") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.867447 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4" (OuterVolumeSpecName: "kube-api-access-2xpn4") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "kube-api-access-2xpn4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.943917 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.944027 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.944049 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.515796 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6"} Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.516890 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.515850 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.519020 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" exitCode=0 Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.519096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.531185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerStarted","Data":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.534543 5120 generic.go:358] "Generic (PLEG): container finished" podID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerID="a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1" exitCode=0 Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.534677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerDied","Data":"a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.566037 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gppd2" podStartSLOduration=3.66442423 podStartE2EDuration="4.566002632s" podCreationTimestamp="2026-01-22 12:00:18 +0000 UTC" firstStartedPulling="2026-01-22 12:00:19.49386963 +0000 UTC m=+754.237817991" lastFinishedPulling="2026-01-22 
12:00:20.395448012 +0000 UTC m=+755.139396393" observedRunningTime="2026-01-22 12:00:22.560281133 +0000 UTC m=+757.304229534" watchObservedRunningTime="2026-01-22 12:00:22.566002632 +0000 UTC m=+757.309951013" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.800172 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.886523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"ee0a1780-1d96-46a3-8386-55404b6d1299\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.898731 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25" (OuterVolumeSpecName: "kube-api-access-6zq25") pod "ee0a1780-1d96-46a3-8386-55404b6d1299" (UID: "ee0a1780-1d96-46a3-8386-55404b6d1299"). InnerVolumeSpecName "kube-api-access-6zq25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.989230 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076027 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076803 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="pull" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076829 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="pull" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076857 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076866 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076879 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="util" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076887 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="util" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076905 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076912 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.077059 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.077085 5120 
memory_manager.go:356] "RemoveStaleState removing state" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.083893 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.088540 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.090451 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.192845 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.193669 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.193946 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296077 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296212 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296398 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296761 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.297134 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.320035 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.409384 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552499 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerDied","Data":"1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d"} Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552568 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552660 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.651163 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: W0122 12:00:24.656307 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae07b37_44a2_4e47_abb9_5587cb866c3b.slice/crio-91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44 WatchSource:0}: Error finding container 91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44: Status 404 returned error can't find the container with id 91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44 Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.097272 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.105037 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.109046 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211206 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211377 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313679 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313761 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313973 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.314586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.315093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.340988 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.431437 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.566208 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="5a7eac1401ed4fb13883b23933c3760dfa0683d239946b53867596a24b0b4cff" exitCode=0 Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.567202 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"5a7eac1401ed4fb13883b23933c3760dfa0683d239946b53867596a24b0b4cff"} Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.567718 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerStarted","Data":"91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44"} Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.702004 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:26 crc kubenswrapper[5120]: I0122 12:00:26.577101 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569"} Jan 22 12:00:26 crc kubenswrapper[5120]: I0122 12:00:26.577567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5"} Jan 22 12:00:27 crc kubenswrapper[5120]: I0122 12:00:27.589001 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569" exitCode=0 Jan 22 12:00:27 crc kubenswrapper[5120]: I0122 12:00:27.589166 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569"} Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.386092 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.386246 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.459516 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.611148 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="b37c38b1002ae37fd9ff7c238483d69f23a331a6f3e37e5457de3788313dbb4b" exitCode=0 Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.611271 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"b37c38b1002ae37fd9ff7c238483d69f23a331a6f3e37e5457de3788313dbb4b"} Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.719134 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.841797 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.035636 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.035808 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188074 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188153 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188185 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290113 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.291485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.291746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.336732 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod 
\"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.387275 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.676222 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="8a17a3cad236fdd8f7ff096c755d24c506711e1ef238f52220a595513ba9515d" exitCode=0 Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.678537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"8a17a3cad236fdd8f7ff096c755d24c506711e1ef238f52220a595513ba9515d"} Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.932122 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.432716 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.488994 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.489207 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609147 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609213 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609245 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686355 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb" exitCode=0 Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686502 5120 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686543 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"265be79110a72ebba1156eec2a58e1e49b4bd06b96371e08bd346f68e3921b3b"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.691438 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="8b2d8fb2b5ba83e645b5a7d4d15c755bd2b03fec8b886275e1e00e02c2fe4b16" exitCode=0 Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.691607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"8b2d8fb2b5ba83e645b5a7d4d15c755bd2b03fec8b886275e1e00e02c2fe4b16"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713196 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713672 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.714271 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.714384 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.754025 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.952228 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.103212 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.222858 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.223140 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.223172 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.224290 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle" (OuterVolumeSpecName: "bundle") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.233103 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util" (OuterVolumeSpecName: "util") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.252205 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq" (OuterVolumeSpecName: "kube-api-access-qgdjq") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "kube-api-access-qgdjq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.323479 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324344 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324365 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324377 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.698449 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.701118 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.701147 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.709315 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726526 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726589 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.834326 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" podStartSLOduration=4.888308789 podStartE2EDuration="6.834295841s" podCreationTimestamp="2026-01-22 12:00:25 +0000 UTC" firstStartedPulling="2026-01-22 12:00:27.592418358 +0000 UTC m=+762.336366699" lastFinishedPulling="2026-01-22 12:00:29.53840541 +0000 UTC m=+764.282353751" observedRunningTime="2026-01-22 12:00:31.833859181 +0000 UTC m=+766.577807532" watchObservedRunningTime="2026-01-22 12:00:31.834295841 +0000 UTC m=+766.578244182" Jan 22 12:00:32 crc kubenswrapper[5120]: I0122 12:00:32.736418 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da" exitCode=0 Jan 22 12:00:32 crc kubenswrapper[5120]: I0122 12:00:32.736504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da"} Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.219595 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.219972 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gppd2" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" containerID="cri-o://c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" gracePeriod=2 Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.746146 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a" exitCode=0 Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.746223 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.127783 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.272980 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277665 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277794 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.278655 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle" (OuterVolumeSpecName: "bundle") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.286517 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util" (OuterVolumeSpecName: "util") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.289555 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn" (OuterVolumeSpecName: "kube-api-access-5ddhn") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "kube-api-access-5ddhn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.379564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380229 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380812 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380915 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.381030 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.381582 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities" (OuterVolumeSpecName: "utilities") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.385330 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt" (OuterVolumeSpecName: "kube-api-access-v5npt") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "kube-api-access-v5npt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.439784 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440562 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440585 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440602 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440609 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440623 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440628 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440642 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440647 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440655 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-utilities" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440661 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-utilities" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440669 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440676 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440691 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440697 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440708 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-content" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440714 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-content" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440729 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440734 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440846 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440859 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440872 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.482831 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.482868 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.496282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.583787 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700414 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700698 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.706321 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-d6h5d\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.707019 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.707860 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.708188 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712247 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712308 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712386 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.714082 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-r9tgh\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.714804 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.723726 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.768233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.771055 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd" exitCode=0 Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.771135 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777235 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" exitCode=0 Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777308 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777329 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"95dee903a35163143fb71dae252bdc46fab906f21721e1c598215d1ffc26c24e"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777351 5120 scope.go:117] "RemoveContainer" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777376 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783369 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783395 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783485 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.798711 5120 scope.go:117] "RemoveContainer" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.819054 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zkkb7" podStartSLOduration=6.097870905 podStartE2EDuration="6.819033305s" podCreationTimestamp="2026-01-22 12:00:28 +0000 UTC" firstStartedPulling="2026-01-22 12:00:30.687820996 +0000 UTC m=+765.431769327" lastFinishedPulling="2026-01-22 12:00:31.408983386 +0000 UTC m=+766.152931727" observedRunningTime="2026-01-22 12:00:34.814835683 +0000 UTC m=+769.558784024" watchObservedRunningTime="2026-01-22 12:00:34.819033305 +0000 UTC m=+769.562981646" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.856808 5120 scope.go:117] "RemoveContainer" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.861219 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.879234 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.888461 5120 scope.go:117] "RemoveContainer" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890096 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": container with ID starting with c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd not found: ID does not exist" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890163 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} err="failed to get container status \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": rpc error: code = NotFound desc = could not find container \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": container with ID starting with c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890196 5120 scope.go:117] 
"RemoveContainer" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890492 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": container with ID starting with 66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b not found: ID does not exist" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890507 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b"} err="failed to get container status \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": rpc error: code = NotFound desc = could not find container \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": container with ID starting with 66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890523 5120 scope.go:117] "RemoveContainer" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890637 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890678 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": container with ID starting with c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e not found: ID does not exist" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890696 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e"} err="failed to get container status \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": rpc error: code = NotFound desc = could not find container \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": container with ID starting with c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890676 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890983 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.891042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.891072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.896507 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.921869 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.922122 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.926423 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.926563 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-k5xkx\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993132 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993225 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 
22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993372 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.001254 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.001262 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.002387 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.003313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.031838 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.038369 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.046489 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.076083 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.094602 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.094835 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.095343 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.095438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.099051 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-k442f\"" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.197912 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198505 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198536 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc 
kubenswrapper[5120]: I0122 12:00:35.205235 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.232632 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.241075 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.305820 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.305904 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.307506 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.328273 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.333177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.421278 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.460981 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.472342 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6924228f_579c_408a_8a40_b103b066446d.slice/crio-7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631 WatchSource:0}: Error finding container 7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631: Status 404 returned error can't find the container with id 7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631 Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.523611 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.557136 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e68b911_b2b1_4a04_a86f_91742f22bad9.slice/crio-dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd WatchSource:0}: Error finding container dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd: Status 404 returned error can't find the container with id dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.582287 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" path="/var/lib/kubelet/pods/23170abf-1fa3-4863-80e8-d7606fdeae60/volumes" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.799113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" event={"ID":"6924228f-579c-408a-8a40-b103b066446d","Type":"ContainerStarted","Data":"7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631"} Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.802238 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" event={"ID":"2e68b911-b2b1-4a04-a86f-91742f22bad9","Type":"ContainerStarted","Data":"dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd"} Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.851464 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.865082 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.871894 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f74f225_731c_48b9_a98d_36a191b5ff41.slice/crio-9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4 WatchSource:0}: Error finding container 9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4: Status 404 returned error can't find the container with id 9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4 Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.877209 5120 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda59fdd4_fe7a_4efd_b136_79a9b05d38b8.slice/crio-7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5 WatchSource:0}: Error finding container 7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5: Status 404 returned error can't find the container with id 7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5 Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.951773 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.956787 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda376ee2_11ae_493e_9e4d_d8ac6fadfb53.slice/crio-516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60 WatchSource:0}: Error finding container 516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60: Status 404 returned error can't find the container with id 516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60 Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.812085 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-s6759" event={"ID":"da59fdd4-fe7a-4efd-b136-79a9b05d38b8","Type":"ContainerStarted","Data":"7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5"} Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.823435 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" event={"ID":"da376ee2-11ae-493e-9e4d-d8ac6fadfb53","Type":"ContainerStarted","Data":"516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60"} Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.825154 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" event={"ID":"6f74f225-731c-48b9-a98d-36a191b5ff41","Type":"ContainerStarted","Data":"9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4"} Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.388821 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.389887 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.487855 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.920831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.764496 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.835680 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.835878 5120 util.go:30] "No sandbox for pod can be found. 
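The probe transitions above for certified-operators-zkkb7 (startup "unhealthy" then "started", readiness "not ready" then "ready") are driven by probes declared on the container. A hedged client-go sketch of what such a declaration looks like; the command and timings here are illustrative placeholders, not values read from this log:

    package main

    import corev1 "k8s.io/api/core/v1"

    // illustrativeProbes builds startup and readiness probes of the kind a
    // registry-server container might declare (example values only).
    func illustrativeProbes() (startup, readiness *corev1.Probe) {
        handler := corev1.ProbeHandler{
            Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
        }
        startup = &corev1.Probe{
            ProbeHandler:     handler,
            PeriodSeconds:    10,
            FailureThreshold: 15, // reported "unhealthy" until the first success
        }
        readiness = &corev1.Probe{
            ProbeHandler:  handler,
            PeriodSeconds: 10, // toggles the pod between "not ready" and "ready"
        }
        return startup, readiness
    }

    func main() { _, _ = illustrativeProbes() }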
Need to start a new one" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.841721 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.841924 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-r4sfd\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.842127 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.842308 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.974527 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.974911 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.975109 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.076778 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.076937 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.079534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.089428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.098621 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.156920 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.165463 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.421271 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.905093 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zkkb7" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" containerID="cri-o://d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" gracePeriod=2 Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.271639 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.537770 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.538107 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.542236 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-jwlzv\"" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.706082 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.809853 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.844006 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.932310 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" exitCode=0 Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.933008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315"} Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.934478 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.835795 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.880357 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.880642 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957303 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957383 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957462 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.058989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059162 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059740 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.060033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.102993 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod 
\"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.204386 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.386683 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.463826 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.463996 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.464308 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.470921 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities" (OuterVolumeSpecName: "utilities") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.502704 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2" (OuterVolumeSpecName: "kube-api-access-4wpw2") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "kube-api-access-4wpw2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.516337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566332 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566381 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566392 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982348 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"265be79110a72ebba1156eec2a58e1e49b4bd06b96371e08bd346f68e3921b3b"} Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982407 5120 scope.go:117] "RemoveContainer" containerID="d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982572 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:46 crc kubenswrapper[5120]: I0122 12:00:46.003901 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:46 crc kubenswrapper[5120]: I0122 12:00:46.020807 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:47 crc kubenswrapper[5120]: I0122 12:00:47.580012 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" path="/var/lib/kubelet/pods/8b5a6248-a718-4c8c-b2d8-26c979672691/volumes" Jan 22 12:00:52 crc kubenswrapper[5120]: I0122 12:00:52.923068 5120 scope.go:117] "RemoveContainer" containerID="e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a" Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.023711 5120 scope.go:117] "RemoveContainer" containerID="04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb" Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.097530 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:53 crc kubenswrapper[5120]: W0122 12:00:53.116847 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6e8a299_2880_4236_8f8b_b6983db7ed96.slice/crio-e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022 WatchSource:0}: Error finding container e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022: Status 404 returned error can't find the container with id e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022 Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.503010 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.612719 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:53 crc kubenswrapper[5120]: W0122 12:00:53.617372 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb084ddd_669f_4358_a97d_4f3a5ba9fae7.slice/crio-2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6 WatchSource:0}: Error finding container 2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6: Status 404 returned error can't find the container with id 2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6 Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.104190 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" event={"ID":"b6e8a299-2880-4236-8f8b-b6983db7ed96","Type":"ContainerStarted","Data":"e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.109304 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.109360 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.111407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-s6759" event={"ID":"da59fdd4-fe7a-4efd-b136-79a9b05d38b8","Type":"ContainerStarted","Data":"7b081cd748b64d4412cf433484aca345a3dc58b87ac614237dbf16e41e6470e6"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.111734 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.113639 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" event={"ID":"da376ee2-11ae-493e-9e4d-d8ac6fadfb53","Type":"ContainerStarted","Data":"5011c740aeeeefd3f87c0b199bac4428287b673354d19f67d55c9b38162fdbc7"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.113881 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.116234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" event={"ID":"6924228f-579c-408a-8a40-b103b066446d","Type":"ContainerStarted","Data":"374f57e2a88009eca867315ec61b154a67e2811b189b1a9c604b3feae64609cf"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.117366 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" event={"ID":"164c4d54-e519-4e1e-9e4b-3e2881312d55","Type":"ContainerStarted","Data":"ed92b3deb2dd7e7fd2d55fc582e2b90d346b992cbbaf81bb7daae2cbbd1ad89f"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.122376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" 
event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.124446 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" event={"ID":"6f74f225-731c-48b9-a98d-36a191b5ff41","Type":"ContainerStarted","Data":"7cc4f4cec7e980219e8b3d1caa52c63506622afefcda6781783f435ab9466227"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.126898 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" event={"ID":"2e68b911-b2b1-4a04-a86f-91742f22bad9","Type":"ContainerStarted","Data":"980f5f6379f90c53764fe3ebd0806b348f27c07a0537933e88050bd05e0c2dd4"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.138633 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.156935 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" podStartSLOduration=2.993501276 podStartE2EDuration="20.156907806s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.874991726 +0000 UTC m=+770.618940067" lastFinishedPulling="2026-01-22 12:00:53.038398256 +0000 UTC m=+787.782346597" observedRunningTime="2026-01-22 12:00:54.151331349 +0000 UTC m=+788.895279690" watchObservedRunningTime="2026-01-22 12:00:54.156907806 +0000 UTC m=+788.900856147" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.173562 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" podStartSLOduration=2.708230173 podStartE2EDuration="20.173535333s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.573845004 +0000 UTC m=+770.317793345" lastFinishedPulling="2026-01-22 12:00:53.039150164 +0000 UTC m=+787.783098505" observedRunningTime="2026-01-22 12:00:54.169010253 +0000 UTC m=+788.912958604" watchObservedRunningTime="2026-01-22 12:00:54.173535333 +0000 UTC m=+788.917483674" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.198701 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" podStartSLOduration=2.736606854 podStartE2EDuration="20.19868372s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.481155647 +0000 UTC m=+770.225103988" lastFinishedPulling="2026-01-22 12:00:52.943232513 +0000 UTC m=+787.687180854" observedRunningTime="2026-01-22 12:00:54.195553874 +0000 UTC m=+788.939502225" watchObservedRunningTime="2026-01-22 12:00:54.19868372 +0000 UTC m=+788.942632061" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.224211 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" podStartSLOduration=2.188042802 podStartE2EDuration="19.224186585s" podCreationTimestamp="2026-01-22 12:00:35 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.959865383 +0000 UTC m=+770.703813724" lastFinishedPulling="2026-01-22 12:00:52.996009166 +0000 UTC m=+787.739957507" observedRunningTime="2026-01-22 12:00:54.221746126 +0000 UTC 
m=+788.965694487" watchObservedRunningTime="2026-01-22 12:00:54.224186585 +0000 UTC m=+788.968134926" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.280756 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-s6759" podStartSLOduration=3.220699738 podStartE2EDuration="20.280731162s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.87838062 +0000 UTC m=+770.622328961" lastFinishedPulling="2026-01-22 12:00:52.938412044 +0000 UTC m=+787.682360385" observedRunningTime="2026-01-22 12:00:54.250551702 +0000 UTC m=+788.994500043" watchObservedRunningTime="2026-01-22 12:00:54.280731162 +0000 UTC m=+789.024679503" Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.138828 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e" exitCode=0 Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.138907 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e"} Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.141405 5120 generic.go:358] "Generic (PLEG): container finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7" exitCode=0 Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.141506 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7"} Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.165685 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="4f72ccb1642fc514b9735baafbda633ffe9225e4363cd42b5b789071633690a3" exitCode=0 Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.166155 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"4f72ccb1642fc514b9735baafbda633ffe9225e4363cd42b5b789071633690a3"} Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.168662 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77"} Jan 22 12:00:58 crc kubenswrapper[5120]: I0122 12:00:58.177453 5120 generic.go:358] "Generic (PLEG): container finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77" exitCode=0 Jan 22 12:00:58 crc kubenswrapper[5120]: I0122 12:00:58.177511 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77"} Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.872753 5120 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981822 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.984821 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle" (OuterVolumeSpecName: "bundle") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.990764 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util" (OuterVolumeSpecName: "util") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.001902 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg" (OuterVolumeSpecName: "kube-api-access-bwkkg") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "kube-api-access-bwkkg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083137 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083166 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083175 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200356 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a"} Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200521 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a" Jan 22 12:01:05 crc kubenswrapper[5120]: I0122 12:01:05.148253 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.647312 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648222 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-utilities" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648238 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-utilities" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648249 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-content" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648256 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-content" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648286 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="pull" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648292 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="pull" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648300 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648306 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648331 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="util" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648336 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="util" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648345 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648352 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 
12:01:10.648457 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648470 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.208891 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.211895 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-mrl56\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.212332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.212846 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.217848 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.245248 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.245315 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.347134 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.347244 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.348199 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" 
(UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.370365 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.568574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:15 crc kubenswrapper[5120]: W0122 12:01:15.177023 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3360ac52_3ac8_4f21_9f80_e225b93f2056.slice/crio-d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0 WatchSource:0}: Error finding container d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0: Status 404 returned error can't find the container with id d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0 Jan 22 12:01:15 crc kubenswrapper[5120]: I0122 12:01:15.183216 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:15 crc kubenswrapper[5120]: I0122 12:01:15.315721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" event={"ID":"3360ac52-3ac8-4f21-9f80-e225b93f2056","Type":"ContainerStarted","Data":"d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.323316 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" event={"ID":"b6e8a299-2880-4236-8f8b-b6983db7ed96","Type":"ContainerStarted","Data":"8743b47d2c9fb36616db41b8cf4a3ae9d3b694267453758a4a96e39424ada641"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.327187 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.330680 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" event={"ID":"164c4d54-e519-4e1e-9e4b-3e2881312d55","Type":"ContainerStarted","Data":"ec197ea488202eca7ed71560b2f91de6854e07c86caeadcb8ac6716ba236310b"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.348516 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" podStartSLOduration=12.690898919 podStartE2EDuration="34.348497481s" podCreationTimestamp="2026-01-22 12:00:42 +0000 UTC" firstStartedPulling="2026-01-22 12:00:53.119021132 +0000 UTC m=+787.862969473" lastFinishedPulling="2026-01-22 12:01:14.776619694 +0000 UTC m=+809.520568035" observedRunningTime="2026-01-22 12:01:16.341009277 +0000 UTC m=+811.084957618" watchObservedRunningTime="2026-01-22 12:01:16.348497481 +0000 UTC m=+811.092445822" 
Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.377772 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dxmrl" podStartSLOduration=29.878599249 podStartE2EDuration="32.377755158s" podCreationTimestamp="2026-01-22 12:00:44 +0000 UTC" firstStartedPulling="2026-01-22 12:00:54.110438687 +0000 UTC m=+788.854387028" lastFinishedPulling="2026-01-22 12:00:56.609594596 +0000 UTC m=+791.353542937" observedRunningTime="2026-01-22 12:01:16.373708349 +0000 UTC m=+811.117656710" watchObservedRunningTime="2026-01-22 12:01:16.377755158 +0000 UTC m=+811.121703499"
Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.408487 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" podStartSLOduration=15.296854983 podStartE2EDuration="36.408462681s" podCreationTimestamp="2026-01-22 12:00:40 +0000 UTC" firstStartedPulling="2026-01-22 12:00:53.557097481 +0000 UTC m=+788.301045822" lastFinishedPulling="2026-01-22 12:01:14.668705179 +0000 UTC m=+809.412653520" observedRunningTime="2026-01-22 12:01:16.396823375 +0000 UTC m=+811.140771716" watchObservedRunningTime="2026-01-22 12:01:16.408462681 +0000 UTC m=+811.152411022"
Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.953140 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.023588 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.023839 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.031885 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032139 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032139 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032279 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032387 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-qbcgw\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032706 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032809 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032909 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.033311 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115132 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115231 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115263 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115308 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115455 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115486 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115659 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115718 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115768 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115816 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115865 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115940 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.116002 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217833 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217943 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218156 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218201 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218368 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218537 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218626 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218681 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218683 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218790 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218880 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218906 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.219437 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.220157 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.220174 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231075 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231039 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.237258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.237302 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.240439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.345028 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.998123 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 22 12:01:18 crc kubenswrapper[5120]: W0122 12:01:18.028539 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6cd7adc_81ad_4b43_bd4c_7f48f1df35be.slice/crio-b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff WatchSource:0}: Error finding container b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff: Status 404 returned error can't find the container with id b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff
Jan 22 12:01:18 crc kubenswrapper[5120]: I0122 12:01:18.352096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff"}
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.205439 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dxmrl"
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.205849 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dxmrl"
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.263949 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dxmrl"
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.397535 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" event={"ID":"3360ac52-3ac8-4f21-9f80-e225b93f2056","Type":"ContainerStarted","Data":"f7b341664d9852f50da8e3be5edc21dfff699eef29c77efb9573fd5602f37a87"}
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.424326 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" podStartSLOduration=5.848324918 podStartE2EDuration="15.424308701s" podCreationTimestamp="2026-01-22 12:01:10 +0000 UTC" firstStartedPulling="2026-01-22 12:01:15.181108241 +0000 UTC m=+809.925056582" lastFinishedPulling="2026-01-22 12:01:24.757092034 +0000 UTC m=+819.501040365" observedRunningTime="2026-01-22 12:01:25.418924369 +0000 UTC m=+820.162872710" watchObservedRunningTime="2026-01-22 12:01:25.424308701 +0000 UTC m=+820.168257042"
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.472085 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dxmrl"
Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.520172 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"]
Jan 22 12:01:27 crc kubenswrapper[5120]: I0122 12:01:27.411207 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dxmrl" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" containerID="cri-o://76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" gracePeriod=2
Jan 22 12:01:28 crc kubenswrapper[5120]: I0122 12:01:28.421288 5120 generic.go:358] "Generic (PLEG): container finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" exitCode=0
Jan 22 12:01:28 crc kubenswrapper[5120]: I0122 12:01:28.421338 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623"}
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.355736 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"]
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.704312 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.709076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.709149 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.708997 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-lqldl\""
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.719753 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"]
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.870765 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.870848 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.976073 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.976512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.001093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.018060 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.027034 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.492144 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"]
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.555581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"]
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.556130 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.560525 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-tph25\""
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.585453 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.585555 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.686323 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.686444 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.709315 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.709880 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.883216 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.563292 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.571803 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575044 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\""
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\""
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575423 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\""
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575616 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\""
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.581190 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640780 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640830 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640863 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640898 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641212 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641262 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641439 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641500 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641539 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641698 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641727 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743174 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743223 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743287 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743315 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743348 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743375 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743443 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744123 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744425 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744523 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744553 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName:
\"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744840 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744849 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.745173 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.745298 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.749263 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.750492 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.762443 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.900950 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399196 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399498 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399838 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399874 5120 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-dxmrl" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" probeResult="unknown" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.723510 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837427 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837546 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.838659 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities" (OuterVolumeSpecName: "utilities") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.848378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq" (OuterVolumeSpecName: "kube-api-access-xvzbq") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "kube-api-access-xvzbq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.916477 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.940535 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.941068 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.941085 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.184009 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.258765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.293375 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"] Jan 22 12:01:39 crc kubenswrapper[5120]: W0122 12:01:39.350013 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab5bde7_2cb3_4840_955e_6eec20d29b5d.slice/crio-850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d WatchSource:0}: Error finding container 850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d: Status 404 returned error can't find the container with id 850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d Jan 22 12:01:39 crc kubenswrapper[5120]: W0122 12:01:39.353228 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe35b4f_1ae8_4e82_8b22_5f2d8fe01445.slice/crio-44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f WatchSource:0}: Error finding container 44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f: Status 404 returned error can't find the container with id 44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501888 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501917 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501967 5120 scope.go:117] "RemoveContainer" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.503478 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" event={"ID":"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445","Type":"ContainerStarted","Data":"44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.505268 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" event={"ID":"fab5bde7-2cb3-4840-955e-6eec20d29b5d","Type":"ContainerStarted","Data":"850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.506848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerStarted","Data":"276b68da221543bdcc6460461785ccf95994beb49cd06591cb2eb132c13d5c0f"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.525708 5120 scope.go:117] "RemoveContainer" containerID="176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.540804 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.546897 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.556502 5120 scope.go:117] "RemoveContainer" containerID="e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.579269 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" path="/var/lib/kubelet/pods/cb084ddd-669f-4358-a97d-4f3a5ba9fae7/volumes" Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.530610 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a"} Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.636937 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.667701 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:42 crc kubenswrapper[5120]: I0122 12:01:42.947239 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:43 crc kubenswrapper[5120]: I0122 12:01:43.548743 5120 generic.go:358] "Generic (PLEG): container finished" podID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" 
containerID="3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a" exitCode=0 Jan 22 12:01:43 crc kubenswrapper[5120]: I0122 12:01:43.548815 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerDied","Data":"3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a"} Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.603788 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605097 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-utilities" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605116 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-utilities" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605138 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605146 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605167 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-content" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605172 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-content" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605275 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.645585 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.645768 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.651379 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.651607 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.652718 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742149 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742576 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742725 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742780 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742855 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742987 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.743028 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.743060 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844573 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844604 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844662 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844853 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845121 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845266 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845516 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 
crc kubenswrapper[5120]: I0122 12:01:44.845545 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845320 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845599 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845646 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845690 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846057 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846379 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846803 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.873858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.873858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.877211 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.969480 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:47 crc kubenswrapper[5120]: I0122 12:01:47.663981 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.026822 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.026940 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.029554 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-hsq9f\"" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.099247 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.099299 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.201181 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.201242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.229055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.229248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.348266 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.609092 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:55 crc kubenswrapper[5120]: W0122 12:01:55.640915 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c64e8f_cd1a_468a_a526_ed7c1ff5ac88.slice/crio-7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc WatchSource:0}: Error finding container 7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc: Status 404 returned error can't find the container with id 7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.652007 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" event={"ID":"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445","Type":"ContainerStarted","Data":"50df14d0ca5a1ffda8da164da20a28b3c793f4246e15a287ebd53ec059380bea"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.654001 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" event={"ID":"fab5bde7-2cb3-4840-955e-6eec20d29b5d","Type":"ContainerStarted","Data":"cf4ac3fe13147c75b2c89468e3f61177e52092c7cf46342f1cb1806fc5d4a4e3"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.657233 5120 generic.go:358] "Generic (PLEG): container finished" podID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerID="b1569baafafbf7d0356bb08e52c1248e97ff42739c703c4fefa538f3ca6039d0" exitCode=0 Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.657295 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerDied","Data":"b1569baafafbf7d0356bb08e52c1248e97ff42739c703c4fefa538f3ca6039d0"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.659762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-n6l95" event={"ID":"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88","Type":"ContainerStarted","Data":"7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.774908 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.822773 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" podStartSLOduration=10.717309491 podStartE2EDuration="25.822746758s" podCreationTimestamp="2026-01-22 12:01:30 +0000 UTC" firstStartedPulling="2026-01-22 12:01:39.35570543 +0000 UTC m=+834.099653771" lastFinishedPulling="2026-01-22 12:01:54.461142697 +0000 UTC m=+849.205091038" observedRunningTime="2026-01-22 12:01:55.801402981 +0000 UTC m=+850.545351312" watchObservedRunningTime="2026-01-22 12:01:55.822746758 +0000 UTC m=+850.566695109" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.852261 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" podStartSLOduration=11.696008335 podStartE2EDuration="26.852236824s" podCreationTimestamp="2026-01-22 12:01:29 +0000 UTC" firstStartedPulling="2026-01-22 12:01:39.355672589 +0000 UTC 
m=+834.099620930" lastFinishedPulling="2026-01-22 12:01:54.511901078 +0000 UTC m=+849.255849419" observedRunningTime="2026-01-22 12:01:55.834022772 +0000 UTC m=+850.577971123" watchObservedRunningTime="2026-01-22 12:01:55.852236824 +0000 UTC m=+850.596185165" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.861063 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.667263 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"7b9979d0e55a1604640eb70e33f26342ecd95b76bfcb410ec6c253bc9cdf96bd"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.668804 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.669916 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-n6l95" event={"ID":"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88","Type":"ContainerStarted","Data":"77d6d2cabaaf5f9aa0d772513f2080a81bfca5d63a5dfbae28a27567093f67bb"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.671247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerStarted","Data":"2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.671372 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile" containerID="cri-o://2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" gracePeriod=30 Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.677864 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.922807 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=17.830049339 podStartE2EDuration="40.922791845s" podCreationTimestamp="2026-01-22 12:01:16 +0000 UTC" firstStartedPulling="2026-01-22 12:01:18.031786658 +0000 UTC m=+812.775734999" lastFinishedPulling="2026-01-22 12:01:41.124529164 +0000 UTC m=+835.868477505" observedRunningTime="2026-01-22 12:01:56.917694361 +0000 UTC m=+851.661642712" watchObservedRunningTime="2026-01-22 12:01:56.922791845 +0000 UTC m=+851.666740186" Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.952985 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-n6l95" podStartSLOduration=9.952944636 podStartE2EDuration="9.952944636s" podCreationTimestamp="2026-01-22 12:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:01:56.949719478 +0000 UTC m=+851.693667819" watchObservedRunningTime="2026-01-22 12:01:56.952944636 +0000 UTC m=+851.696892977" Jan 22 12:01:57 crc kubenswrapper[5120]: 
Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.687427 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log"
Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.687767 5120 generic.go:358] "Generic (PLEG): container finished" podID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerID="2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" exitCode=1
Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.687937 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerDied","Data":"2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98"}
Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.691998 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41"}
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.072336 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.072919 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144915 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144948 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144988 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145039 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145128 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145153 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145203 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145232 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145285 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") "
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145921 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146023 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146437 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146597 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146930 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.147282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.147627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.150598 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"]
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151359 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151385 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151550 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.167123 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"]
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.167501 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69"
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.172541 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.172973 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.173737 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6" (OuterVolumeSpecName: "kube-api-access-r2pw6") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "kube-api-access-r2pw6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274336 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274862 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275162 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275187 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275202 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275217 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275229 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275240 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275252 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275264 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275276 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275288 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275299 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275310 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.376490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.563224 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.589395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.724801 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.725574 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerDied","Data":"276b68da221543bdcc6460461785ccf95994beb49cd06591cb2eb132c13d5c0f"} Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.725760 5120 scope.go:117] "RemoveContainer" containerID="2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.726013 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.766755 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.776321 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.812131 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.589422 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" path="/var/lib/kubelet/pods/a5c6b382-0699-4ddd-9be8-7031369555a5/volumes" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.590765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.683266 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.735033 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerStarted","Data":"7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9"} Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.972345 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.972443 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.772278 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:08 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:08+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:08 crc kubenswrapper[5120]: > Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.801022 5120 generic.go:358] "Generic (PLEG): container finished" podID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerID="ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19" exitCode=0 Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.801185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerDied","Data":"ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19"} Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.803515 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41" exitCode=0 Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.803616 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41"} Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.119310 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.242297 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.253017 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4" (OuterVolumeSpecName: "kube-api-access-xmfp4") pod "724f8cf0-a6c6-45cf-932a-0bdc0247b38f" (UID: "724f8cf0-a6c6-45cf-932a-0bdc0247b38f"). InnerVolumeSpecName "kube-api-access-xmfp4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.343786 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816780 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816800 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerDied","Data":"7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9"} Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816834 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9" Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.195879 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.202399 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.582223 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45690da-bfac-4359-88d2-e604fb44508e" path="/var/lib/kubelet/pods/a45690da-bfac-4359-88d2-e604fb44508e/volumes" Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.831927 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="7eee3f73c044c06a41ca4676e52bbdefc0678bf251415bcfb5e7731f4c73e941" exitCode=0 Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.832015 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"7eee3f73c044c06a41ca4676e52bbdefc0678bf251415bcfb5e7731f4c73e941"} Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.878936 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/manage-dockerfile/0.log" Jan 22 12:02:13 crc kubenswrapper[5120]: I0122 12:02:13.783048 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:13 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:13+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:13 crc kubenswrapper[5120]: > Jan 22 12:02:15 crc kubenswrapper[5120]: I0122 12:02:15.862341 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4"} Jan 22 12:02:15 crc kubenswrapper[5120]: I0122 12:02:15.903914 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=31.903876301 podStartE2EDuration="31.903876301s" podCreationTimestamp="2026-01-22 12:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:02:15.90173583 +0000 UTC m=+870.645684171" watchObservedRunningTime="2026-01-22 12:02:15.903876301 +0000 UTC m=+870.647824642" Jan 22 12:02:18 crc kubenswrapper[5120]: I0122 12:02:18.779310 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:18 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:18+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:18 crc kubenswrapper[5120]: > Jan 22 12:02:24 crc kubenswrapper[5120]: I0122 12:02:24.440036 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:02:31 crc kubenswrapper[5120]: I0122 12:02:31.972549 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:02:32 crc kubenswrapper[5120]: I0122 12:02:31.973200 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.918671 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.920930 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.934610 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.935149 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:02:54 crc 
kubenswrapper[5120]: I0122 12:02:54.459152 5120 scope.go:117] "RemoveContainer" containerID="50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.972898 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.973666 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.973732 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.974453 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.974524 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" gracePeriod=600 Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.109755 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.806397 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" exitCode=0 Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.806473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.807338 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.807399 5120 scope.go:117] "RemoveContainer" containerID="bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.139285 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143235 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143267 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143522 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.153833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.154125 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.156420 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.157324 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.157625 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.255508 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.357147 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.383469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.532774 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.732436 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:01 crc kubenswrapper[5120]: I0122 12:04:01.333916 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerStarted","Data":"4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0"} Jan 22 12:04:04 crc kubenswrapper[5120]: I0122 12:04:04.359813 5120 generic.go:358] "Generic (PLEG): container finished" podID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerID="ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6" exitCode=0 Jan 22 12:04:04 crc kubenswrapper[5120]: I0122 12:04:04.359895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerDied","Data":"ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6"} Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.642832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.737088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"b86909ba-6fe2-4fdd-994d-e5014840c597\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.745642 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk" (OuterVolumeSpecName: "kube-api-access-t85sk") pod "b86909ba-6fe2-4fdd-994d-e5014840c597" (UID: "b86909ba-6fe2-4fdd-994d-e5014840c597"). InnerVolumeSpecName "kube-api-access-t85sk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.839608 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.376747 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.376779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerDied","Data":"4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0"} Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.377452 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.738696 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.751172 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 12:04:07 crc kubenswrapper[5120]: I0122 12:04:07.580500 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" path="/var/lib/kubelet/pods/b79a0076-aa90-4841-9865-b94aef438d2e/volumes" Jan 22 12:04:16 crc kubenswrapper[5120]: I0122 12:04:16.453214 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4" exitCode=0 Jan 22 12:04:16 crc kubenswrapper[5120]: I0122 12:04:16.453302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4"} Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.708768 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838534 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838588 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838643 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838729 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839249 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839301 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839377 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839504 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839530 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839563 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839656 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840160 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840180 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840195 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840308 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840612 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840743 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.846696 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.846982 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.847223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z" (OuterVolumeSpecName: "kube-api-access-7h22z") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "kube-api-access-7h22z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.872730 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942377 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942416 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942426 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942436 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942444 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942456 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942467 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.046198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.145152 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471589 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471583 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca"} Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471636 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca" Jan 22 12:04:20 crc kubenswrapper[5120]: I0122 12:04:20.729338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:20 crc kubenswrapper[5120]: I0122 12:04:20.785638 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.315471 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316780 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316819 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316845 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316853 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316870 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="manage-dockerfile" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316878 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="manage-dockerfile" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316898 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="git-clone" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316906 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="git-clone" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.317159 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.317181 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.466601 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.466836 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.469560 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.470332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.471670 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.474610 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613724 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613895 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613941 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbhk\" 
(UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614216 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614471 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614520 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: 
\"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717390 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717410 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717478 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717496 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717596 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717681 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717780 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717797 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717982 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718025 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718071 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718869 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc 
kubenswrapper[5120]: I0122 12:04:22.718939 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.719336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.719702 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.725721 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.726171 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.739443 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.781997 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.017457 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.511301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150"} Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.511704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"3ade49d83aa5d35969943e2a6648e1a2bec8d3618c1ba25e134b6d8407a2b261"} Jan 22 12:04:24 crc kubenswrapper[5120]: I0122 12:04:24.522067 5120 generic.go:358] "Generic (PLEG): container finished" podID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerID="def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150" exitCode=0 Jan 22 12:04:24 crc kubenswrapper[5120]: I0122 12:04:24.522202 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150"} Jan 22 12:04:26 crc kubenswrapper[5120]: I0122 12:04:26.542166 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0"} Jan 22 12:04:26 crc kubenswrapper[5120]: I0122 12:04:26.572127 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.572102668 podStartE2EDuration="4.572102668s" podCreationTimestamp="2026-01-22 12:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:04:26.571210972 +0000 UTC m=+1001.315159363" watchObservedRunningTime="2026-01-22 12:04:26.572102668 +0000 UTC m=+1001.316051009" Jan 22 12:04:32 crc kubenswrapper[5120]: I0122 12:04:32.900980 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:32 crc kubenswrapper[5120]: I0122 12:04:32.901976 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" containerID="cri-o://12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" gracePeriod=30 Jan 22 12:04:34 crc kubenswrapper[5120]: I0122 12:04:34.889325 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.496710 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.499311 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504386 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504399 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504600 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.516563 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517353 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517416 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517465 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517538 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517622 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517702 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod 
\"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517726 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517768 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517797 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.519625 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.520844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.622689 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.622757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623598 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 
12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623810 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623910 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624044 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624155 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624199 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624297 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod 
\"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624597 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625305 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.626288 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.626875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.627480 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.627568 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.629631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.629947 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.635747 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.636329 5120 generic.go:358] "Generic (PLEG): container finished" podID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerID="12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" exitCode=1 Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.636450 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0"} Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.643828 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.795517 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.796471 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831324 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831388 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831418 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831518 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831547 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831791 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831866 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831916 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.832375 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.833581 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.833652 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.834256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.835266 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.835756 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.836492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.839797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.840492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.842061 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.842389 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk" (OuterVolumeSpecName: "kube-api-access-vgbhk") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "kube-api-access-vgbhk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.843364 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.933981 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934012 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934023 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934033 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934057 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934067 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934075 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934084 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934094 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934102 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934111 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.990408 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.035793 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.058967 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:38 crc kubenswrapper[5120]: W0122 12:04:38.065667 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379c9b40_0f89_404c_ba85_6b98c4a35a4f.slice/crio-04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030 WatchSource:0}: Error finding container 04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030: Status 404 returned error can't find the container with id 04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030 Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.646552 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.647040 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.648871 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.649929 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"3ade49d83aa5d35969943e2a6648e1a2bec8d3618c1ba25e134b6d8407a2b261"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.650017 5120 scope.go:117] "RemoveContainer" containerID="12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.650023 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.734089 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.734216 5120 scope.go:117] "RemoveContainer" containerID="def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.741982 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.583280 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" path="/var/lib/kubelet/pods/3528bca7-c1b4-485a-a9bd-240346daabf5/volumes" Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.659817 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d" exitCode=0 Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.659889 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d"} Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.670448 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="cd887475f11acaa15c3251476f1ae3e6666ac309a6334a7d739d7beadfd34df8" exitCode=0 Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.670518 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"cd887475f11acaa15c3251476f1ae3e6666ac309a6334a7d739d7beadfd34df8"} Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.705120 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/manage-dockerfile/0.log" Jan 22 12:04:41 crc kubenswrapper[5120]: I0122 12:04:41.687234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7"} Jan 22 12:04:41 crc kubenswrapper[5120]: I0122 12:04:41.718243 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=7.718223348 podStartE2EDuration="7.718223348s" podCreationTimestamp="2026-01-22 12:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:04:41.713328155 +0000 UTC m=+1016.457276486" watchObservedRunningTime="2026-01-22 12:04:41.718223348 +0000 UTC m=+1016.462171689" Jan 22 12:04:54 crc kubenswrapper[5120]: I0122 12:04:54.604783 5120 scope.go:117] "RemoveContainer" containerID="48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa" Jan 22 12:05:31 crc kubenswrapper[5120]: I0122 12:05:31.972689 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:05:31 crc kubenswrapper[5120]: I0122 12:05:31.973751 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:05:53 crc kubenswrapper[5120]: I0122 12:05:53.258211 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7" exitCode=0 Jan 22 12:05:53 crc kubenswrapper[5120]: I0122 12:05:53.258309 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7"} Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.605878 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669063 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669162 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669212 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669708 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669883 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669938 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669985 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670014 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670042 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670347 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670465 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670485 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670495 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672525 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677494 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677544 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4" (OuterVolumeSpecName: "kube-api-access-lcjh4") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "kube-api-access-lcjh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771554 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771588 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771600 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771610 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771619 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771628 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771638 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.893920 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.975627 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278307 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030"} Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278421 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030" Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278332 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:05:56 crc kubenswrapper[5120]: I0122 12:05:56.695251 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:56 crc kubenswrapper[5120]: I0122 12:05:56.698811 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.948423 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949909 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949930 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949972 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949981 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949997 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950004 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950018 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="git-clone" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950025 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="git-clone" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950041 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950049 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950180 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950198 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.105277 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.105436 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.107574 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.108816 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.109153 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.109456 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.144275 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.148530 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.149002 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.151753 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.152034 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.152194 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157110 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157346 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157478 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157821 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: 
\"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157906 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.158023 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.158054 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259776 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259856 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: 
\"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259908 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259938 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259985 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260057 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260091 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260152 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260191 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260231 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " 
pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261722 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262011 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262158 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262392 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262509 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262724 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.271685 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: 
\"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.271712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.276173 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.361207 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.378655 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.441194 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.474017 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.720523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.969640 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: W0122 12:06:00.972055 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3858bc47_7853_4b6a_b130_aea8f1f3e8c7.slice/crio-6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3 WatchSource:0}: Error finding container 6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3: Status 404 returned error can't find the container with id 6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3 Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.326278 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerStarted","Data":"6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328107 5120 generic.go:358] "Generic (PLEG): container finished" podID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerID="dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0" exitCode=0 Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerStarted","Data":"7818e737ee5ed95e5328c0dfb23b10ce422c0f3ef74c8c4836187c64df4a40cb"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.972373 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.972935 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:02 crc kubenswrapper[5120]: I0122 12:06:02.339302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerStarted","Data":"0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628"} Jan 22 12:06:02 crc kubenswrapper[5120]: I0122 12:06:02.378051 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.378012266 podStartE2EDuration="3.378012266s" podCreationTimestamp="2026-01-22 12:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:06:02.371824957 +0000 UTC m=+1097.115773318" watchObservedRunningTime="2026-01-22 12:06:02.378012266 +0000 UTC m=+1097.121960617" Jan 22 12:06:03 crc kubenswrapper[5120]: I0122 12:06:03.348805 5120 generic.go:358] "Generic (PLEG): container finished" podID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerID="23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca" exitCode=0 Jan 22 12:06:03 crc kubenswrapper[5120]: I0122 12:06:03.348923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerDied","Data":"23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca"} Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.637861 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.742362 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.751300 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w" (OuterVolumeSpecName: "kube-api-access-tgq6w") pod "3858bc47-7853-4b6a-b130-aea8f1f3e8c7" (UID: "3858bc47-7853-4b6a-b130-aea8f1f3e8c7"). InnerVolumeSpecName "kube-api-access-tgq6w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.844216 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerDied","Data":"6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3"} Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379830 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379779 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.728248 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.734933 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:06:07 crc kubenswrapper[5120]: I0122 12:06:07.589671 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" path="/var/lib/kubelet/pods/ee0a1780-1d96-46a3-8386-55404b6d1299/volumes" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.185484 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.186378 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" containerID="cri-o://0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" gracePeriod=30 Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415237 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415870 5120 generic.go:358] "Generic (PLEG): container finished" podID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerID="0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" exitCode=1 Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415997 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628"} Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.618820 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.619206 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740553 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740641 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740755 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740839 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740890 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740911 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741014 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741090 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741136 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741220 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742054 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742076 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742346 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742364 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742776 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.744119 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.748833 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv" (OuterVolumeSpecName: "kube-api-access-x4tfv") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "kube-api-access-x4tfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.748915 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.749072 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.817256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843414 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843463 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843504 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843513 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843523 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843532 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843543 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843557 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843566 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.865880 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.944433 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.424604 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425152 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425201 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"7818e737ee5ed95e5328c0dfb23b10ce422c0f3ef74c8c4836187c64df4a40cb"} Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425299 5120 scope.go:117] "RemoveContainer" containerID="0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.466996 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.469980 5120 scope.go:117] "RemoveContainer" containerID="dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.475894 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.581435 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" path="/var/lib/kubelet/pods/a36cb230-54e1-4799-a4a6-9009eaba532c/volumes" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.836454 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837722 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837753 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837795 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837804 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837817 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="manage-dockerfile" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837826 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="manage-dockerfile" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837974 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 
crc kubenswrapper[5120]: I0122 12:06:11.837993 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.858227 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.858462 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.862702 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.862742 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.863138 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.864510 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.961871 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.961981 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962099 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962140 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962175 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod 
\"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962206 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962262 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962311 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962386 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962480 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064837 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065009 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065081 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065124 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065226 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065280 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065339 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " 
pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065883 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065928 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.066275 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.066514 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067217 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067314 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.073331 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.076227 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.104741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.184270 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.449202 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:12 crc kubenswrapper[5120]: W0122 12:06:12.452278 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f1f5ecd_00ad_4747_b1eb_d701595508ad.slice/crio-eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd WatchSource:0}: Error finding container eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd: Status 404 returned error can't find the container with id eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd Jan 22 12:06:13 crc kubenswrapper[5120]: I0122 12:06:13.456895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6"} Jan 22 12:06:13 crc kubenswrapper[5120]: I0122 12:06:13.456993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd"} Jan 22 12:06:14 crc kubenswrapper[5120]: I0122 12:06:14.468578 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6" exitCode=0 Jan 22 12:06:14 crc kubenswrapper[5120]: I0122 12:06:14.468721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6"} Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.478445 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="91730ac168074586a3bede1ac6f7a0e951dd552d1fc754cd02e012bb515ca1c7" exitCode=0 Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.478556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"91730ac168074586a3bede1ac6f7a0e951dd552d1fc754cd02e012bb515ca1c7"} Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.515543 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/manage-dockerfile/0.log" Jan 22 12:06:16 crc kubenswrapper[5120]: I0122 12:06:16.493553 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58"} Jan 22 12:06:16 crc kubenswrapper[5120]: I0122 12:06:16.542789 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.5427550629999995 podStartE2EDuration="5.542755063s" podCreationTimestamp="2026-01-22 12:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:06:16.530863548 +0000 UTC m=+1111.274811929" watchObservedRunningTime="2026-01-22 12:06:16.542755063 +0000 UTC m=+1111.286703444" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.972392 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973100 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973158 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973784 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973842 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" gracePeriod=600 Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.606901 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" exitCode=0 Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" 
event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607568 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607590 5120 scope.go:117] "RemoveContainer" containerID="7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" Jan 22 12:06:54 crc kubenswrapper[5120]: I0122 12:06:54.757183 5120 scope.go:117] "RemoveContainer" containerID="a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.018648 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.021877 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.032195 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.032448 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.138288 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.269936 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.270125 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277456 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277592 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277646 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.312910 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.414171 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.456527 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.588529 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.858037 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:01 crc kubenswrapper[5120]: I0122 12:08:01.428600 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerStarted","Data":"714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a"} Jan 22 12:08:02 crc kubenswrapper[5120]: I0122 12:08:02.437661 5120 generic.go:358] "Generic (PLEG): container finished" podID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerID="8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6" exitCode=0 Jan 22 12:08:02 crc kubenswrapper[5120]: I0122 12:08:02.437743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerDied","Data":"8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6"} Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.728623 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.871756 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.879389 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh" (OuterVolumeSpecName: "kube-api-access-2tfrh") pod "ba296aaf-56d0-49e4-b647-aae80f6fbd52" (UID: "ba296aaf-56d0-49e4-b647-aae80f6fbd52"). InnerVolumeSpecName "kube-api-access-2tfrh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.973632 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") on node \"crc\" DevicePath \"\"" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475018 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerDied","Data":"714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a"} Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475091 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475201 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.816541 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.825758 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:08:05 crc kubenswrapper[5120]: I0122 12:08:05.579834 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" path="/var/lib/kubelet/pods/724f8cf0-a6c6-45cf-932a-0bdc0247b38f/volumes" Jan 22 12:08:54 crc kubenswrapper[5120]: I0122 12:08:54.901254 5120 scope.go:117] "RemoveContainer" containerID="ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19" Jan 22 12:09:01 crc kubenswrapper[5120]: I0122 12:09:01.973070 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:09:01 crc kubenswrapper[5120]: I0122 12:09:01.975632 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:09:31 crc kubenswrapper[5120]: I0122 12:09:31.973321 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:09:31 crc kubenswrapper[5120]: I0122 12:09:31.974346 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.152054 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154038 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154064 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154255 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.221744 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.222229 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.226843 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.227423 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.226880 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.366237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.467606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.503197 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.558934 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.827785 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.839292 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973400 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973512 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973575 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.974616 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.974921 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" gracePeriod=600 Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.461284 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" exitCode=0 Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.461378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.462031 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.462064 5120 scope.go:117] "RemoveContainer" containerID="853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.464519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" 
event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerStarted","Data":"b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896"} Jan 22 12:10:03 crc kubenswrapper[5120]: I0122 12:10:03.472987 5120 generic.go:358] "Generic (PLEG): container finished" podID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerID="e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd" exitCode=0 Jan 22 12:10:03 crc kubenswrapper[5120]: I0122 12:10:03.473128 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerDied","Data":"e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd"} Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.730527 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.822261 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.833501 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh" (OuterVolumeSpecName: "kube-api-access-ztxvh") pod "86fa02fb-d5af-46f8-b19a-9af5fd7e5353" (UID: "86fa02fb-d5af-46f8-b19a-9af5fd7e5353"). InnerVolumeSpecName "kube-api-access-ztxvh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.924117 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499593 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerDied","Data":"b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896"} Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499638 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499677 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.810968 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.816561 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:10:07 crc kubenswrapper[5120]: I0122 12:10:07.598666 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" path="/var/lib/kubelet/pods/b86909ba-6fe2-4fdd-994d-e5014840c597/volumes" Jan 22 12:10:09 crc kubenswrapper[5120]: I0122 12:10:09.547849 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58" exitCode=0 Jan 22 12:10:09 crc kubenswrapper[5120]: I0122 12:10:09.547987 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58"} Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.820179 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.921894 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922001 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922032 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922096 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922119 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922211 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod 
\"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922287 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922433 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922552 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922618 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.923556 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.923689 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924158 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924391 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924412 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924420 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924429 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924592 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.925043 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.929723 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.929757 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.931475 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm" (OuterVolumeSpecName: "kube-api-access-b2sgm") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "kube-api-access-b2sgm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.936274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025837 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025883 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025894 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025904 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025916 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025925 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.290660 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.330394 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565309 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd"} Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565350 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565326 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:10:13 crc kubenswrapper[5120]: I0122 12:10:13.383364 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:13 crc kubenswrapper[5120]: I0122 12:10:13.461917 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.857598 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859411 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859440 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859482 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="manage-dockerfile" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859497 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="manage-dockerfile" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859512 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859525 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859548 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="git-clone" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859559 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="git-clone" Jan 
22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859733 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859760 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.138018 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.138296 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142108 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142474 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142989 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.143031 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219238 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219287 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219315 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219335 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219470 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: 
I0122 12:10:17.219554 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219583 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219792 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219857 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.220048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.220091 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322379 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322434 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322552 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322730 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322802 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322867 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322925 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc 
kubenswrapper[5120]: I0122 12:10:17.322944 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.323053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.323354 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324617 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324679 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324882 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324893 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.325216 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.325347 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.330473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.330474 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.346620 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.465938 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.660383 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613090 5120 generic.go:358] "Generic (PLEG): container finished" podID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerID="fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189" exitCode=0 Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613169 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189"} Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613902 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerStarted","Data":"9264394407fc53c3467d3c267a00f3c26c21801d6472430023b8b496c2178810"} Jan 22 12:10:19 crc kubenswrapper[5120]: I0122 12:10:19.628655 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerStarted","Data":"57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce"} Jan 22 12:10:19 crc kubenswrapper[5120]: I0122 12:10:19.655844 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.655825665 podStartE2EDuration="3.655825665s" podCreationTimestamp="2026-01-22 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:10:19.654165125 +0000 UTC m=+1354.398113466" watchObservedRunningTime="2026-01-22 12:10:19.655825665 +0000 UTC m=+1354.399774006" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.498514 5120 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.499463 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" containerID="cri-o://57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" gracePeriod=30 Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.690906 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.691834 5120 generic.go:358] "Generic (PLEG): container finished" podID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerID="57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" exitCode=1 Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.692007 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce"} Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.934676 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.936061 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094534 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094567 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094714 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094736 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094814 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094874 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094913 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095095 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095147 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095910 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.096981 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.097206 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.097784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098002 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098092 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098361 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.106817 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.106901 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8" (OuterVolumeSpecName: "kube-api-access-rtlq8") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "kube-api-access-rtlq8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.107178 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.163044 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197487 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197564 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197584 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197601 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197620 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197636 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197654 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197671 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197690 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: 
I0122 12:10:28.197707 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197728 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.260262 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.299575 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.705667 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707686 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707727 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"9264394407fc53c3467d3c267a00f3c26c21801d6472430023b8b496c2178810"} Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707871 5120 scope.go:117] "RemoveContainer" containerID="57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.762639 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.773669 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.777622 5120 scope.go:117] "RemoveContainer" containerID="fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.058648 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059825 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059859 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059876 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="manage-dockerfile" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059885 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="manage-dockerfile" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.060135 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.085735 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.085975 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089181 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089626 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089807 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.090006 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212309 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212369 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212400 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212434 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212457 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212477 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212496 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212517 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212635 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212897 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.213037 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314517 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: 
I0122 12:10:29.314648 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314746 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315033 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315089 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315115 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315141 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315292 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315298 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315333 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315644 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315733 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316017 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316054 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316356 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: 
\"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.323752 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.328062 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.333992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.411862 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.582340 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" path="/var/lib/kubelet/pods/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/volumes" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.861386 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.612639 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.624787 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.626900 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.731714 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd"} Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.731781 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0"} Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775491 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775599 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775806 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876787 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876846 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876891 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.877434 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" 
(UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.877516 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.901946 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.976124 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.210434 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:31 crc kubenswrapper[5120]: W0122 12:10:31.217719 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8619b7_91c0_4e9a_a414_e678f914250c.slice/crio-4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797 WatchSource:0}: Error finding container 4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797: Status 404 returned error can't find the container with id 4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.742639 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd" exitCode=0 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.742744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd"} Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.745872 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" exitCode=0 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.746012 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636"} Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.746086 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797"} Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.409033 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.413645 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.430215 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.499868 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.499933 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.500012 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602595 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602760 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602831 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.604652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.605068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.716764 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.733066 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.771818 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="6e84abcfc92c46cd9c05ade077fb1a9e87b366a03f1ae7450820d1f8b8b9c951" exitCode=0 Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.772046 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"6e84abcfc92c46cd9c05ade077fb1a9e87b366a03f1ae7450820d1f8b8b9c951"} Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.819947 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/manage-dockerfile/0.log" Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.013728 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.784374 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.787644 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789716 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab" exitCode=0 Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789805 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789830 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerStarted","Data":"8ec38ad3a9388cddc24e4a5a9f2b784e7e27aed1fbe43c2d56585e8290fcc036"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.819681 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.819647592 podStartE2EDuration="4.819647592s" podCreationTimestamp="2026-01-22 12:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:10:33.816102796 +0000 UTC m=+1368.560051137" watchObservedRunningTime="2026-01-22 12:10:33.819647592 +0000 UTC 
m=+1368.563595933" Jan 22 12:10:36 crc kubenswrapper[5120]: I0122 12:10:36.816022 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" exitCode=0 Jan 22 12:10:36 crc kubenswrapper[5120]: I0122 12:10:36.816187 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.825641 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.827918 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35" exitCode=0 Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.827948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.848093 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8lnh" podStartSLOduration=7.00627784 podStartE2EDuration="7.848066124s" podCreationTimestamp="2026-01-22 12:10:30 +0000 UTC" firstStartedPulling="2026-01-22 12:10:31.74713191 +0000 UTC m=+1366.491080251" lastFinishedPulling="2026-01-22 12:10:32.588920194 +0000 UTC m=+1367.332868535" observedRunningTime="2026-01-22 12:10:37.845481842 +0000 UTC m=+1372.589430203" watchObservedRunningTime="2026-01-22 12:10:37.848066124 +0000 UTC m=+1372.592014465" Jan 22 12:10:38 crc kubenswrapper[5120]: I0122 12:10:38.840759 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerStarted","Data":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} Jan 22 12:10:38 crc kubenswrapper[5120]: I0122 12:10:38.861354 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kkhnz" podStartSLOduration=3.815780525 podStartE2EDuration="6.86133017s" podCreationTimestamp="2026-01-22 12:10:32 +0000 UTC" firstStartedPulling="2026-01-22 12:10:33.790933386 +0000 UTC m=+1368.534881727" lastFinishedPulling="2026-01-22 12:10:36.836483031 +0000 UTC m=+1371.580431372" observedRunningTime="2026-01-22 12:10:38.86012425 +0000 UTC m=+1373.604072601" watchObservedRunningTime="2026-01-22 12:10:38.86133017 +0000 UTC m=+1373.605278511" Jan 22 12:10:40 crc kubenswrapper[5120]: I0122 12:10:40.976412 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:40 crc kubenswrapper[5120]: I0122 12:10:40.976852 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.028220 5120 prober.go:120] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8lnh" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" probeResult="failure" output=< Jan 22 12:10:42 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 12:10:42 crc kubenswrapper[5120]: > Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.733705 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.733796 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.781817 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.922459 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:43 crc kubenswrapper[5120]: I0122 12:10:43.024758 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:44 crc kubenswrapper[5120]: I0122 12:10:44.897853 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kkhnz" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server" containerID="cri-o://b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" gracePeriod=2 Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.876446 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914729 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" exitCode=0 Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914858 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914906 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"8ec38ad3a9388cddc24e4a5a9f2b784e7e27aed1fbe43c2d56585e8290fcc036"} Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914931 5120 scope.go:117] "RemoveContainer" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.915221 5120 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.937740 5120 scope.go:117] "RemoveContainer" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.957376 5120 scope.go:117] "RemoveContainer" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.977805 5120 scope.go:117] "RemoveContainer" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"
Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.983777 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": container with ID starting with b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03 not found: ID does not exist" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.983861 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} err="failed to get container status \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": rpc error: code = NotFound desc = could not find container \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": container with ID starting with b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03 not found: ID does not exist"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.983914 5120 scope.go:117] "RemoveContainer" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"
Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.984490 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": container with ID starting with c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35 not found: ID does not exist" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.984552 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"} err="failed to get container status \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": rpc error: code = NotFound desc = could not find container \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": container with ID starting with c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35 not found: ID does not exist"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.984595 5120 scope.go:117] "RemoveContainer" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"
Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.984941 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": container with ID starting with 40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab not found: ID does not exist" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.985017 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"} err="failed to get container status \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": rpc error: code = NotFound desc = could not find container \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": container with ID starting with 40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab not found: ID does not exist"
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.020833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") "
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.021127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") "
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.021178 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") "
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.023095 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities" (OuterVolumeSpecName: "utilities") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.038297 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j" (OuterVolumeSpecName: "kube-api-access-k6j2j") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "kube-api-access-k6j2j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.067035 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
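The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are benign: the kubelet asks CRI-O to delete containers that have already been removed, and a NotFound answer is treated as success so cleanup stays idempotent. A sketch of that pattern against a CRI-style gRPC error (illustrative helper, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeFn stands in for a CRI RemoveContainer call; it is illustrative.
type removeFn func(containerID string) error

// removeIfPresent treats a gRPC NotFound as "already removed", mirroring
// how the kubelet's cleanup loop tolerates containers the runtime deleted.
func removeIfPresent(remove removeFn, id string) error {
	if err := remove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already gone; nothing to do
		}
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIfPresent(gone, "b2ba86375a4b")) // <nil>
}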
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.123558 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.124122 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.124141 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.261571 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.267571 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:47 crc kubenswrapper[5120]: I0122 12:10:47.582460 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" path="/var/lib/kubelet/pods/5dab9f1c-1f91-40c9-a40d-06e7e8573d49/volumes" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.022499 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.083085 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.262120 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:52 crc kubenswrapper[5120]: I0122 12:10:52.973416 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8lnh" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" containerID="cri-o://891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" gracePeriod=2 Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.869158 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.940934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.941081 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.941115 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.942595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities" (OuterVolumeSpecName: "utilities") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.949352 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp" (OuterVolumeSpecName: "kube-api-access-q7fhp") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "kube-api-access-q7fhp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983130 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" exitCode=0 Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983229 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983268 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983728 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797"} Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983766 5120 scope.go:117] "RemoveContainer" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.014827 5120 scope.go:117] "RemoveContainer" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.037739 5120 scope.go:117] "RemoveContainer" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.043073 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.043103 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.057338 5120 scope.go:117] "RemoveContainer" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.058174 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": container with ID starting with 891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8 not found: ID does not exist" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058218 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} err="failed to get container status \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": rpc error: code = NotFound desc = could not find container \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": container with ID starting with 891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058247 5120 scope.go:117] "RemoveContainer" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.058719 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": container with ID starting with 32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204 not found: ID does not exist" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058803 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} err="failed to get container status \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": rpc error: code = NotFound desc = could not find container \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": container with ID starting with 32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058903 5120 scope.go:117] "RemoveContainer" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.059305 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": container with ID starting with 77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636 not found: ID does not exist" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.059343 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636"} err="failed to get container status \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": rpc error: code = NotFound desc = could not find container \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": container with ID starting with 77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.110673 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.144294 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.324488 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.332983 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:55 crc kubenswrapper[5120]: I0122 12:10:55.047699 5120 scope.go:117] "RemoveContainer" containerID="ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6" Jan 22 12:10:55 crc kubenswrapper[5120]: I0122 12:10:55.581369 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" path="/var/lib/kubelet/pods/6b8619b7-91c0-4e9a-a414-e678f914250c/volumes" Jan 22 12:11:34 crc kubenswrapper[5120]: I0122 12:11:34.341021 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f" exitCode=0 Jan 22 12:11:34 crc kubenswrapper[5120]: I0122 12:11:34.341136 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f"} Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.684656 5120 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838054 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838169 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838310 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838392 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838415 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838442 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838492 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838840 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839110 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839266 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") "
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839735 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840140 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840162 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840176 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840280 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840578 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.841129 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.842305 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.847875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.848279 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.848522 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw" (OuterVolumeSpecName: "kube-api-access-68kdw") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "kube-api-access-68kdw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941305 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941347 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941357 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941367 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941377 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941388 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941397 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.951944 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.043242 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.359925 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0"}
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.360001 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0"
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.360012 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.640019 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.652237 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.450228 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451156 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-content"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451175 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-content"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451190 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="git-clone"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451199 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="git-clone"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451215 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-utilities"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451222 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-utilities"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451231 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="manage-dockerfile"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451238 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="manage-dockerfile"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451250 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-content"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451258 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-content"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451267 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451274 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451289 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-utilities"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451297 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-utilities"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451314 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451321 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451343 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451349 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451475 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451486 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451497 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.469724 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.469890 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472126 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\""
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\""
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472294 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\""
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.481556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\""
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612930 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612971 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612988 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613020 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613319 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613564 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613770 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613848 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.715909 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.716437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.716686 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717279 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717731 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718050 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718231 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718435 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718473 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718507 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718529 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718291 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718629 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718653 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718836 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.719389 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.719585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.720350 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.724373 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.724439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.733286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.788524 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:41 crc kubenswrapper[5120]: I0122 12:11:41.220545 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:41 crc kubenswrapper[5120]: I0122 12:11:41.404179 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerStarted","Data":"ecb1085dcea5d3f742a090414f13d009680e89665e63e23de882bd7baa988a47"} Jan 22 12:11:42 crc kubenswrapper[5120]: I0122 12:11:42.412967 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerID="a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d" exitCode=0 Jan 22 12:11:42 crc kubenswrapper[5120]: I0122 12:11:42.414109 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d"} Jan 22 12:11:43 crc kubenswrapper[5120]: I0122 12:11:43.422355 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerStarted","Data":"ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e"} Jan 22 12:11:43 crc kubenswrapper[5120]: I0122 12:11:43.448363 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.448340345 podStartE2EDuration="3.448340345s" podCreationTimestamp="2026-01-22 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:11:43.447145976 +0000 UTC m=+1438.191094337" watchObservedRunningTime="2026-01-22 12:11:43.448340345 +0000 UTC m=+1438.192288686" Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.155440 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.156803 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" containerID="cri-o://ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" gracePeriod=30 Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.499763 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.500741 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerID="ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" exitCode=1 Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.500849 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e"} Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.297752 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.298503 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349068 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349222 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349250 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349432 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349472 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349572 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349619 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349658 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349711 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir" 
(OuterVolumeSpecName: "buildcachedir") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349736 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349785 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349865 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349896 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349935 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350740 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350776 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350789 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350995 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.359217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.359501 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.364789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.364838 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.365269 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.368122 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.372310 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r" (OuterVolumeSpecName: "kube-api-access-77m4r") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "kube-api-access-77m4r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.407856 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452483 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452535 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452547 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452559 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452570 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452580 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452593 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452606 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452618 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.512411 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513131 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"ecb1085dcea5d3f742a090414f13d009680e89665e63e23de882bd7baa988a47"} Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513208 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513234 5120 scope.go:117] "RemoveContainer" containerID="ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.551934 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.562774 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.566690 5120 scope.go:117] "RemoveContainer" containerID="a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.849753 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850633 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="manage-dockerfile" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850657 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="manage-dockerfile" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850699 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850706 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850837 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.078375 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.078551 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081486 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081487 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081814 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.082871 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176238 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176272 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176287 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176309 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176327 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176344 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176363 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176409 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176454 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176475 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277565 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277709 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277740 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277833 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278026 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 
12:11:53.279133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279502 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279729 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279996 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280611 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280787 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.282183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.297975 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.297981 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.300864 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.395766 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.583440 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" path="/var/lib/kubelet/pods/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/volumes" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.832297 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:54 crc kubenswrapper[5120]: I0122 12:11:54.540412 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5"} Jan 22 12:11:54 crc kubenswrapper[5120]: I0122 12:11:54.541142 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86"} Jan 22 12:11:54 crc kubenswrapper[5120]: E0122 12:11:54.746510 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec972f4_74cd_403c_a0a5_2e56146e5aa2.slice/crio-05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5.scope\": RecentStats: unable to find data in memory cache]" Jan 22 12:11:55 crc kubenswrapper[5120]: I0122 12:11:55.577509 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5" exitCode=0 Jan 22 12:11:55 crc kubenswrapper[5120]: I0122 12:11:55.595578 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5"} Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.588034 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="83551d9292ded413ca21fc8aea98430e64cb1d14c3daa2a17a085a00e029936a" exitCode=0 Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.588282 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"83551d9292ded413ca21fc8aea98430e64cb1d14c3daa2a17a085a00e029936a"} Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.640892 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/manage-dockerfile/0.log" Jan 22 12:11:57 crc kubenswrapper[5120]: I0122 12:11:57.602749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c"} Jan 22 12:11:57 crc kubenswrapper[5120]: I0122 12:11:57.636614 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.636586576 podStartE2EDuration="5.636586576s" 
podCreationTimestamp="2026-01-22 12:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:11:57.632869185 +0000 UTC m=+1452.376817546" watchObservedRunningTime="2026-01-22 12:11:57.636586576 +0000 UTC m=+1452.380534937" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.139563 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.532902 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.533019 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.537028 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.538713 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.542885 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.594824 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.696838 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.721259 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.856943 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:01 crc kubenswrapper[5120]: I0122 12:12:01.065534 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:01 crc kubenswrapper[5120]: I0122 12:12:01.637740 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerStarted","Data":"c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114"} Jan 22 12:12:10 crc kubenswrapper[5120]: I0122 12:12:10.738082 5120 generic.go:358] "Generic (PLEG): container finished" podID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerID="afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573" exitCode=0 Jan 22 12:12:10 crc kubenswrapper[5120]: I0122 12:12:10.738240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerDied","Data":"afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573"} Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.002139 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.103663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"2284d302-27de-4f84-9cd9-0b27dc76e987\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.131129 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4" (OuterVolumeSpecName: "kube-api-access-8l8k4") pod "2284d302-27de-4f84-9cd9-0b27dc76e987" (UID: "2284d302-27de-4f84-9cd9-0b27dc76e987"). InnerVolumeSpecName "kube-api-access-8l8k4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.205934 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754804 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754838 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerDied","Data":"c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114"} Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754871 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114" Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.072985 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.080242 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.580039 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" path="/var/lib/kubelet/pods/3858bc47-7853-4b6a-b130-aea8f1f3e8c7/volumes" Jan 22 12:12:31 crc kubenswrapper[5120]: I0122 12:12:31.973057 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:12:31 crc kubenswrapper[5120]: I0122 12:12:31.974789 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.284502 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285568 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285584 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285705 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.289597 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.305753 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333117 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333172 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333232 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434312 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434408 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434982 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.435080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.456248 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.606113 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.901475 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.937895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"78da9bd235cb1cbf9c32214297c3faf8f9a8e55366f2f87f0a281db0b912c76d"} Jan 22 12:12:34 crc kubenswrapper[5120]: I0122 12:12:34.946389 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5" exitCode=0 Jan 22 12:12:34 crc kubenswrapper[5120]: I0122 12:12:34.946778 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5"} Jan 22 12:12:35 crc kubenswrapper[5120]: I0122 12:12:35.955260 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab"} Jan 22 12:12:36 crc kubenswrapper[5120]: I0122 12:12:36.964288 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab" exitCode=0 Jan 22 12:12:36 crc kubenswrapper[5120]: I0122 12:12:36.964407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab"} Jan 22 12:12:37 crc kubenswrapper[5120]: I0122 12:12:37.975631 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b"} Jan 22 12:12:38 crc kubenswrapper[5120]: I0122 12:12:38.007012 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqrjj" podStartSLOduration=4.307357092 podStartE2EDuration="5.006937045s" podCreationTimestamp="2026-01-22 12:12:33 +0000 UTC" firstStartedPulling="2026-01-22 12:12:34.947775335 +0000 UTC m=+1489.691723686" lastFinishedPulling="2026-01-22 12:12:35.647355298 +0000 UTC m=+1490.391303639" observedRunningTime="2026-01-22 12:12:38.004325522 +0000 UTC m=+1492.748273863" watchObservedRunningTime="2026-01-22 12:12:38.006937045 +0000 UTC m=+1492.750885426" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.606894 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.607579 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.679818 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:44 crc kubenswrapper[5120]: I0122 12:12:44.083522 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:44 crc kubenswrapper[5120]: I0122 12:12:44.131587 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:46 crc kubenswrapper[5120]: I0122 12:12:46.052899 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqrjj" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" containerID="cri-o://4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" gracePeriod=2 Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.696190 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.696401 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.781496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.781496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.073563 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" exitCode=0 Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.073674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b"} Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.428661 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479169 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479233 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479311 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.481473 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities" (OuterVolumeSpecName: "utilities") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.496257 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z" (OuterVolumeSpecName: "kube-api-access-dtg7z") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "kube-api-access-dtg7z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.541757 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581294 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581342 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581353 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.117429 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.116935 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"78da9bd235cb1cbf9c32214297c3faf8f9a8e55366f2f87f0a281db0b912c76d"} Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.118813 5120 scope.go:117] "RemoveContainer" containerID="4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.157612 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.165019 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.385722 5120 scope.go:117] "RemoveContainer" containerID="47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.406275 5120 scope.go:117] "RemoveContainer" containerID="15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.581085 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" path="/var/lib/kubelet/pods/28308e30-8c83-4b30-93e3-1aff509cf1dc/volumes" Jan 22 12:12:55 crc kubenswrapper[5120]: I0122 12:12:55.241680 5120 scope.go:117] "RemoveContainer" containerID="23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca" Jan 22 12:13:01 crc kubenswrapper[5120]: I0122 12:13:01.972746 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:13:01 crc kubenswrapper[5120]: I0122 12:13:01.973764 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:13:05 crc kubenswrapper[5120]: I0122 12:13:05.278851 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c" exitCode=0 Jan 22 12:13:05 crc kubenswrapper[5120]: I0122 12:13:05.279045 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c"} Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.575917 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602147 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602244 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602281 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602323 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602392 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602483 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602600 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602664 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602683 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.603174 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.604985 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.605016 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.605102 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.606545 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.606906 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.608922 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.613407 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.613937 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m" (OuterVolumeSpecName: "kube-api-access-q9j5m") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "kube-api-access-q9j5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.616744 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704743 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704807 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704834 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704861 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704884 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704907 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704932 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704983 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.705008 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.705032 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.745165 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.806671 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300196 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86"} Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300247 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300332 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.642311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.723347 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.537066 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538212 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538233 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538248 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-utilities" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538256 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-utilities" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538289 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="manage-dockerfile" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538301 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="manage-dockerfile" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538315 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="git-clone" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538323 5120 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="git-clone" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538333 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538340 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538357 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-content" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538364 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-content" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538506 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538526 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.547807 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.551556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-8gw2f\"" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.553011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.600818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.600893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.702369 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.703517 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " 
pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.704814 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.727449 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.881819 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:13 crc kubenswrapper[5120]: I0122 12:13:13.145815 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:13 crc kubenswrapper[5120]: W0122 12:13:13.151177 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9d3100_17a5_4c92_bf93_17c74efea49f.slice/crio-c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c WatchSource:0}: Error finding container c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c: Status 404 returned error can't find the container with id c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c Jan 22 12:13:13 crc kubenswrapper[5120]: I0122 12:13:13.348827 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" event={"ID":"8f9d3100-17a5-4c92-bf93-17c74efea49f","Type":"ContainerStarted","Data":"c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c"} Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.127025 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.720765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.720927 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.723053 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-tzsgp\"" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.876126 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.876201 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978043 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978241 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978980 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:17 crc kubenswrapper[5120]: I0122 12:13:17.009599 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:17 crc kubenswrapper[5120]: I0122 12:13:17.040184 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:27 crc kubenswrapper[5120]: I0122 12:13:27.120180 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:28 crc kubenswrapper[5120]: I0122 12:13:28.494199 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" event={"ID":"71c6d75c-6634-4017-92b9-487a57bcc47b","Type":"ContainerStarted","Data":"a05c553685a347cc1108be355ff912afacd0408d86bf990855d241612c189e06"} Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972566 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972662 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972732 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.973571 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.973633 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f" gracePeriod=600 Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534004 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f" exitCode=0 Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534167 5120 scope.go:117] "RemoveContainer" containerID="0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 12:13:34.545640 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" event={"ID":"8f9d3100-17a5-4c92-bf93-17c74efea49f","Type":"ContainerStarted","Data":"15bc519c44c271587bd2ef9f8859c7f75171cb70dd45fe5bd26e4304eb0c6206"} Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 
Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 12:13:34.569088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" podStartSLOduration=1.762455828 podStartE2EDuration="22.569063215s" podCreationTimestamp="2026-01-22 12:13:12 +0000 UTC" firstStartedPulling="2026-01-22 12:13:13.152858876 +0000 UTC m=+1527.896807207" lastFinishedPulling="2026-01-22 12:13:33.959466243 +0000 UTC m=+1548.703414594" observedRunningTime="2026-01-22 12:13:34.565088099 +0000 UTC m=+1549.309036440" watchObservedRunningTime="2026-01-22 12:13:34.569063215 +0000 UTC m=+1549.313011556"
Jan 22 12:13:40 crc kubenswrapper[5120]: I0122 12:13:40.616389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" event={"ID":"71c6d75c-6634-4017-92b9-487a57bcc47b","Type":"ContainerStarted","Data":"3419440dd3ec67879ab184544f0d29d3207e2973d23ba74b4c204745af173815"}
Jan 22 12:13:40 crc kubenswrapper[5120]: I0122 12:13:40.640328 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" podStartSLOduration=11.906733284 podStartE2EDuration="24.640310764s" podCreationTimestamp="2026-01-22 12:13:16 +0000 UTC" firstStartedPulling="2026-01-22 12:13:27.47081654 +0000 UTC m=+1542.214764881" lastFinishedPulling="2026-01-22 12:13:40.204394 +0000 UTC m=+1554.948342361" observedRunningTime="2026-01-22 12:13:40.636059851 +0000 UTC m=+1555.380008212" watchObservedRunningTime="2026-01-22 12:13:40.640310764 +0000 UTC m=+1555.384259105"
Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.156827 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"]
Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.755890 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"]
Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.756201 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.760398 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.762232 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.762262 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.878432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.980969 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.003806 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.095783 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.321522 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.785703 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerStarted","Data":"457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f"} Jan 22 12:14:02 crc kubenswrapper[5120]: I0122 12:14:02.795896 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerID="21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0" exitCode=0 Jan 22 12:14:02 crc kubenswrapper[5120]: I0122 12:14:02.796014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerDied","Data":"21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0"} Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.048675 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.125604 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.137052 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7" (OuterVolumeSpecName: "kube-api-access-7qkj7") pod "2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" (UID: "2c1b3bc9-3782-474e-a90c-86f0ba86fa6a"). InnerVolumeSpecName "kube-api-access-7qkj7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.226980 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") on node \"crc\" DevicePath \"\"" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814545 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerDied","Data":"457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f"} Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814593 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814684 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.113809 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.119756 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.583243 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" path="/var/lib/kubelet/pods/ba296aaf-56d0-49e4-b647-aae80f6fbd52/volumes" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.581459 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.582932 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.582978 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.583149 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.598780 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.598939 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.601755 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602413 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602457 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602529 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-2nlrp\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602648 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.603057 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.603164 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696649 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696669 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696873 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798851 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798879 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798906 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799039 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.800546 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.809156 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.809206 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" 
(UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.810267 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.811921 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.817499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.820248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.922461 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:14 crc kubenswrapper[5120]: I0122 12:14:14.350486 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:14 crc kubenswrapper[5120]: I0122 12:14:14.893975 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerStarted","Data":"11628814cb12f23bc6c37dd57728341ba4c21021b5a9ed812a9f0c32aac8439a"} Jan 22 12:14:19 crc kubenswrapper[5120]: I0122 12:14:19.937301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerStarted","Data":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} Jan 22 12:14:19 crc kubenswrapper[5120]: I0122 12:14:19.969159 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" podStartSLOduration=2.009468003 podStartE2EDuration="6.969121801s" podCreationTimestamp="2026-01-22 12:14:13 +0000 UTC" firstStartedPulling="2026-01-22 12:14:14.356340281 +0000 UTC m=+1589.100288622" lastFinishedPulling="2026-01-22 12:14:19.315994079 +0000 UTC m=+1594.059942420" observedRunningTime="2026-01-22 12:14:19.965188986 +0000 UTC m=+1594.709137347" watchObservedRunningTime="2026-01-22 12:14:19.969121801 +0000 UTC m=+1594.713070162" Jan 22 12:14:24 crc kubenswrapper[5120]: I0122 12:14:24.808018 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.620133 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.620498 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625622 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625637 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.626165 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.628118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.628199 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630017 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630163 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-r88wg\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630597 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.633987 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.713997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714200 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714267 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714385 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714423 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714507 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714554 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714603 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714673 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xz5\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 
crc kubenswrapper[5120]: I0122 12:14:25.816697 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816812 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816859 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816904 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817333 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xz5\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817387 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817537 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817581 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: 
\"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818171 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818240 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818369 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818437 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: E0122 12:14:25.818912 5120 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 22 12:14:25 crc kubenswrapper[5120]: E0122 12:14:25.819098 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls podName:af3a73d7-3578-4530-9916-0c3613d55591 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:26.31906456 +0000 UTC m=+1601.063012921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "af3a73d7-3578-4530-9916-0c3613d55591") : secret "default-prometheus-proxy-tls" not found Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.819127 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825717 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825766 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/516a3b7b844e4c3dd8240e8a8a3b1694cea78fced0a6ec1a814e8c4102adf5e0/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825951 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.829245 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.830210 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.835741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.849797 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.861036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xz5\" (UniqueName: 
\"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.868332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.329220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.336241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.544735 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.812667 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.999416 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"267f7596e532eb745728638b48080b207a404ea82edb46b16da5fa5634680e48"} Jan 22 12:14:32 crc kubenswrapper[5120]: I0122 12:14:32.046078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11"} Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.494220 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.558632 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.558872 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.684310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.786075 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.814033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.889259 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:36 crc kubenswrapper[5120]: I0122 12:14:36.187561 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:36 crc kubenswrapper[5120]: W0122 12:14:36.215219 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb40028b_f955_4b75_b559_a1c4ec5c9256.slice/crio-6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1 WatchSource:0}: Error finding container 6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1: Status 404 returned error can't find the container with id 6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1 Jan 22 12:14:37 crc kubenswrapper[5120]: I0122 12:14:37.091050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" event={"ID":"cb40028b-f955-4b75-b559-a1c4ec5c9256","Type":"ContainerStarted","Data":"6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1"} Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.874833 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.898972 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.899245 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904500 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904527 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-csp9t\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904505 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904942 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.905095 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971646 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971850 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971869 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971884 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971908 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971929 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073367 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075691 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075717 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.076407 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.076559 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:39.576526656 +0000 UTC m=+1614.320474997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.084528 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.084592 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/993d84bf7be45a27faf02d688ca3124bd0e06ed43b7298b0f65b55e404201a0b/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.086561 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.095773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.096080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.098642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.099094 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.103308 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.104257 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.110857 5120 generic.go:358] "Generic (PLEG): container finished" podID="af3a73d7-3578-4530-9916-0c3613d55591" containerID="6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11" exitCode=0 Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 
12:14:39.110912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerDied","Data":"6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11"} Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.123170 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.584864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.585498 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.585615 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:40.585589558 +0000 UTC m=+1615.329538119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:40 crc kubenswrapper[5120]: I0122 12:14:40.603592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:40 crc kubenswrapper[5120]: E0122 12:14:40.603939 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:40 crc kubenswrapper[5120]: E0122 12:14:40.604568 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:42.604531404 +0000 UTC m=+1617.348479755 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:42 crc kubenswrapper[5120]: I0122 12:14:42.642393 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:42 crc kubenswrapper[5120]: E0122 12:14:42.643908 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:42 crc kubenswrapper[5120]: E0122 12:14:42.644160 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:46.644135537 +0000 UTC m=+1621.388083888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:46 crc kubenswrapper[5120]: I0122 12:14:46.729094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:46 crc kubenswrapper[5120]: I0122 12:14:46.738283 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.023603 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-csp9t\"" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.031661 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.382539 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:47 crc kubenswrapper[5120]: W0122 12:14:47.389616 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88fc8b5e_6a79_414c_8a72_7447f8db3056.slice/crio-bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3 WatchSource:0}: Error finding container bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3: Status 404 returned error can't find the container with id bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3 Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.202816 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" event={"ID":"cb40028b-f955-4b75-b559-a1c4ec5c9256","Type":"ContainerStarted","Data":"0d1bcaf6d02cf6d43327afa2e95c3dbc92c421661b050060877f4244b7795329"} Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.205036 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3"} Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.228518 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" podStartSLOduration=1.711983881 podStartE2EDuration="13.228494865s" podCreationTimestamp="2026-01-22 12:14:35 +0000 UTC" firstStartedPulling="2026-01-22 12:14:36.219311136 +0000 UTC m=+1610.963259497" lastFinishedPulling="2026-01-22 12:14:47.73582213 +0000 UTC m=+1622.479770481" observedRunningTime="2026-01-22 12:14:48.225232836 +0000 UTC m=+1622.969181187" watchObservedRunningTime="2026-01-22 12:14:48.228494865 +0000 UTC m=+1622.972443196" Jan 22 12:14:51 crc kubenswrapper[5120]: I0122 12:14:51.238702 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2"} Jan 22 12:14:52 crc kubenswrapper[5120]: I0122 12:14:52.250536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"b092ff5869d20f2aaf91483ecba3ef7e97ccf8e1e82ef6a6dbb4b90d4a22c378"} Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.281028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"7e4ddf9c82997913cbafbab529e2d0b650a371fab6ea95043271935edefc4350"} Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.393121 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.457122 5120 scope.go:117] "RemoveContainer" containerID="8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.457765 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.464754 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465412 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465693 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-xtq4h\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465929 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480600 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480820 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480846 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480985 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: 
I0122 12:14:55.582080 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582260 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582842 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: E0122 12:14:55.583011 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:55 crc kubenswrapper[5120]: E0122 12:14:55.583104 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls podName:d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:56.083080616 +0000 UTC m=+1630.827028957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" (UID: "d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.584098 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.604490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.610498 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:56 crc kubenswrapper[5120]: I0122 12:14:56.092061 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:56 crc kubenswrapper[5120]: E0122 12:14:56.092297 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:56 crc kubenswrapper[5120]: E0122 12:14:56.092377 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls podName:d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:57.092355984 +0000 UTC m=+1631.836304325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" (UID: "d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.110804 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.123914 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.340319 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.777364 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.897345 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.906414 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.908898 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.909459 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.909872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940677 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940880 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940925 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.042618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 
22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043086 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043135 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043212 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.043418 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.043525 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls podName:e3b00756-b775-4a1c-90b1-852a7f1712b7 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:59.543500294 +0000 UTC m=+1634.287448635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" (UID: "e3b00756-b775-4a1c-90b1-852a7f1712b7") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.044659 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.045699 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.058908 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.060989 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.317552 5120 generic.go:358] "Generic (PLEG): container finished" podID="88fc8b5e-6a79-414c-8a72-7447f8db3056" containerID="30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2" exitCode=0 Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.317670 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerDied","Data":"30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2"} Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.551242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.551453 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.551518 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls podName:e3b00756-b775-4a1c-90b1-852a7f1712b7 nodeName:}" failed. No retries permitted until 2026-01-22 12:15:00.551500291 +0000 UTC m=+1635.295448622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" (UID: "e3b00756-b775-4a1c-90b1-852a7f1712b7") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:15:00 crc kubenswrapper[5120]: W0122 12:15:00.048320 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3caee9e_30bb_45fe_8ff9_2ef2a5f6d9a2.slice/crio-69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400 WatchSource:0}: Error finding container 69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400: Status 404 returned error can't find the container with id 69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400 Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.135254 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.572745 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.600688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.726130 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993441 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993495 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400"} Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993680 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.997887 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.999012 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081318 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081401 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081906 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183283 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183550 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183981 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.184759 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 
22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.208341 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.214688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.318574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.392368 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.624793 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.789071 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.041747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.042056 5120 util.go:30] "No sandbox for pod can be found. 
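
Note: the Job name collect-profiles-29484735 is not random. The CronJob controller derives the suffix from the scheduled time expressed in minutes since the Unix epoch, so the suffix alone dates the run; checking it against the 12:15:00 schedule visible in this log:

package main

import (
	"fmt"
	"time"
)

// The CronJob controller names each Job <cronjob-name>-<scheduled
// time in minutes since epoch>; 2026-01-22 12:15:00 UTC should
// therefore yield the 29484735 suffix seen above.
func main() {
	sched := time.Date(2026, 1, 22, 12, 15, 0, 0, time.UTC)
	fmt.Printf("collect-profiles-%d\n", sched.Unix()/60) // collect-profiles-29484735
}
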
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.056583 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.056909 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.166877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167023 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167134 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167197 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.269490 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.269980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" 
(UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270341 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270877 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.271326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.277311 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.279413 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.289759 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.373536 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.543745 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:15:04 crc kubenswrapper[5120]: W0122 12:15:04.554235 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5445dd15_192f_4528_92eb_f9507eb342c4.slice/crio-0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1 WatchSource:0}: Error finding container 0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1: Status 404 returned error can't find the container with id 0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1 Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.369012 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerStarted","Data":"0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1"} Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.370589 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"d34f142542f8f75929c7974229697eb860737c0228fc61a137ea5912ad5fe315"} Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.443632 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:05 crc kubenswrapper[5120]: W0122 12:15:05.493403 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9836015c_341f_44a4_a0b1_2d155148b264.slice/crio-829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2 WatchSource:0}: Error finding container 829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2: Status 404 returned error can't find the container with id 829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2 Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.396927 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.412608 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"a6e0f981f486f38353addb18f494e615397c4a02727a7ff4e676ed27dc14fef0"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.431540 5120 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"4531be95f01863e65e2e98ca683f9ec6225692957186aed34aa39591d4778820"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.452506 5120 generic.go:358] "Generic (PLEG): container finished" podID="5445dd15-192f-4528-92eb-f9507eb342c4" containerID="21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef" exitCode=0 Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.453032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerDied","Data":"21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.469623 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"18c931b48b2a489c0d341f03ada9db0324e8480268636d9729fb5334d4d8d860"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.470791 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.7351635 podStartE2EDuration="43.47077629s" podCreationTimestamp="2026-01-22 12:14:23 +0000 UTC" firstStartedPulling="2026-01-22 12:14:26.8200614 +0000 UTC m=+1601.564009741" lastFinishedPulling="2026-01-22 12:15:04.55567419 +0000 UTC m=+1639.299622531" observedRunningTime="2026-01-22 12:15:06.468494304 +0000 UTC m=+1641.212442655" watchObservedRunningTime="2026-01-22 12:15:06.47077629 +0000 UTC m=+1641.214724641" Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.478466 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"7e7dda325c715430af27761b2a39be29ee48203c1dad63762cfd24e7d9e23e0a"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.545809 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:07 crc kubenswrapper[5120]: I0122 12:15:07.491180 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"b2a18027ba7ead04f02ba66effda4a3c4923f293ef04c4cba6c16d3c3826c19c"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.011288 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143408 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143580 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.144549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.152423 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn" (OuterVolumeSpecName: "kube-api-access-6hffn") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "kube-api-access-6hffn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.152487 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246596 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246645 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246655 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.502780 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505465 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505477 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerDied","Data":"0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505542 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.508184 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"f0d4869d9e180c8bbdd71016c35f08e28f84ea8f2fec345086b185d2bd76264f"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.510000 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.512907 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c"} Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.546236 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.591929 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.902997 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903735 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903759 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903891 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.097916 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.098491 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.102219 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.102608 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.153096 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.185862 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.185932 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.186015 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.186177 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288421 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288597 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.289833 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.290083 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.300997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.307064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: 
\"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.418367 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.555575 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.753357 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.755484 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.763884 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834620 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834656 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834747 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.936889 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 
12:15:16.937017 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937213 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937254 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937383 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.938024 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.959013 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.971925 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:17 crc kubenswrapper[5120]: I0122 12:15:17.083025 5120 util.go:30] "No sandbox for pod can be found. 
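
Note: every pod in this section mounts a kube-api-access-* volume of type kubernetes.io/projected. That volume merges three sources into one directory: the bound service-account token, the cluster CA bundle, and the pod's namespace, all landing at the standard in-container path. A sketch of how a container consumes it (runnable only inside a pod with such a mount):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Standard mount point of the projected kube-api-access-* volume
// inside any container that receives one.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func mustRead(name string) string {
	b, err := os.ReadFile(filepath.Join(saDir, name))
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println("namespace:", mustRead("namespace")) // e.g. service-telemetry
	fmt.Println("token bytes:", len(mustRead("token")))
	fmt.Println("ca.crt bytes:", len(mustRead("ca.crt")))
}
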
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.618110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.643041 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"4a63e34c1ffbd75e3c65e9e084a7dd1b67521626f1f3f4fd7badd98b6697470f"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.646156 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"c5b43c7d4d175f20607714924f607b2cab0d2d7acd443bfc7099ba9e09ffec32"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.651038 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"82b888440747885e73b0062a3decda119b5c046feb43a820a60bd17e9f0ceea8"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.653247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"685a5c6249805ca15d8f5185f3b283536087b10b109e7643b1c22aeafe4b8bd1"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.655042 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"7a1e68c2b807cc5db771eb6f695c299dd8632926aa5a511b0837ee2d145343df"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.695798 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" podStartSLOduration=4.03966137 podStartE2EDuration="18.695772781s" podCreationTimestamp="2026-01-22 12:15:02 +0000 UTC" firstStartedPulling="2026-01-22 12:15:05.496675523 +0000 UTC m=+1640.240623864" lastFinishedPulling="2026-01-22 12:15:20.152786934 +0000 UTC m=+1654.896735275" observedRunningTime="2026-01-22 12:15:20.664187483 +0000 UTC m=+1655.408135824" watchObservedRunningTime="2026-01-22 12:15:20.695772781 +0000 UTC m=+1655.439721122" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.709696 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.719745 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" podStartSLOduration=7.051880252 podStartE2EDuration="22.719707262s" podCreationTimestamp="2026-01-22 12:14:58 +0000 UTC" firstStartedPulling="2026-01-22 12:15:04.543902335 +0000 UTC m=+1639.287850676" lastFinishedPulling="2026-01-22 12:15:20.211729355 +0000 UTC m=+1654.955677686" observedRunningTime="2026-01-22 12:15:20.702387782 +0000 UTC m=+1655.446336123" watchObservedRunningTime="2026-01-22 12:15:20.719707262 +0000 UTC m=+1655.463655603" Jan 22 
12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.735510 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=24.892188595 podStartE2EDuration="43.735487405s" podCreationTimestamp="2026-01-22 12:14:37 +0000 UTC" firstStartedPulling="2026-01-22 12:14:59.319502927 +0000 UTC m=+1634.063451278" lastFinishedPulling="2026-01-22 12:15:18.162801747 +0000 UTC m=+1652.906750088" observedRunningTime="2026-01-22 12:15:20.728450554 +0000 UTC m=+1655.472398915" watchObservedRunningTime="2026-01-22 12:15:20.735487405 +0000 UTC m=+1655.479435736" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.761568 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" podStartSLOduration=5.615158932 podStartE2EDuration="25.761544588s" podCreationTimestamp="2026-01-22 12:14:55 +0000 UTC" firstStartedPulling="2026-01-22 12:15:00.049880774 +0000 UTC m=+1634.793829105" lastFinishedPulling="2026-01-22 12:15:20.19626642 +0000 UTC m=+1654.940214761" observedRunningTime="2026-01-22 12:15:20.757193463 +0000 UTC m=+1655.501141814" watchObservedRunningTime="2026-01-22 12:15:20.761544588 +0000 UTC m=+1655.505492929" Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"987f3eb973e91c81504a3418c1f8a80a647f84502e1f5cae44699d863ab161f1"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665418 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665437 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"dc07db03994e16c1a351f1718a043a8e78984fa524399e878a4d263e3c4c812c"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.668801 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"c92b7f18e3f80496e34770f60f788dc96266c13d766c754380cb51da51e2f377"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.668850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.706070 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" podStartSLOduration=10.24052666 podStartE2EDuration="10.706049015s" podCreationTimestamp="2026-01-22 12:15:11 +0000 UTC" firstStartedPulling="2026-01-22 12:15:20.707162747 +0000 UTC m=+1655.451111088" lastFinishedPulling="2026-01-22 12:15:21.172685102 +0000 UTC m=+1655.916633443" 
observedRunningTime="2026-01-22 12:15:21.687384702 +0000 UTC m=+1656.431333043" watchObservedRunningTime="2026-01-22 12:15:21.706049015 +0000 UTC m=+1656.449997356" Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.708195 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" podStartSLOduration=6.24051379 podStartE2EDuration="6.708187818s" podCreationTimestamp="2026-01-22 12:15:15 +0000 UTC" firstStartedPulling="2026-01-22 12:15:20.6232738 +0000 UTC m=+1655.367222141" lastFinishedPulling="2026-01-22 12:15:21.090947828 +0000 UTC m=+1655.834896169" observedRunningTime="2026-01-22 12:15:21.702370777 +0000 UTC m=+1656.446319128" watchObservedRunningTime="2026-01-22 12:15:21.708187818 +0000 UTC m=+1656.452136159" Jan 22 12:15:28 crc kubenswrapper[5120]: I0122 12:15:28.616824 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:28 crc kubenswrapper[5120]: I0122 12:15:28.617797 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" containerID="cri-o://0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" gracePeriod=30 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.123352 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.162374 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163292 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163321 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163479 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.168643 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.182581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235244 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236402 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236426 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236506 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236826 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238162 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238338 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238374 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238431 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238617 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238771 5120 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244298 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials" 
(OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244382 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.246703 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.247007 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5" (OuterVolumeSpecName: "kube-api-access-jmtv5") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "kube-api-access-jmtv5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.266853 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.339887 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.339987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340399 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340450 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340501 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340533 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340684 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340709 5120 reconciler_common.go:299] "Volume detached for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340727 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340740 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340753 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340770 5120 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.342428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347026 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347145 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347654 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: 
I0122 12:15:29.347745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.359934 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.485917 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.756149 5120 generic.go:358] "Generic (PLEG): container finished" podID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.756801 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerDied","Data":"9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.757633 5120 scope.go:117] "RemoveContainer" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.766461 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.777542 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.777780 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerDied","Data":"c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.778494 5120 scope.go:117] "RemoveContainer" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.785969 5120 generic.go:358] "Generic (PLEG): container finished" podID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerDied","Data":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786076 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" 
event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerDied","Data":"11628814cb12f23bc6c37dd57728341ba4c21021b5a9ed812a9f0c32aac8439a"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786095 5120 scope.go:117] "RemoveContainer" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786270 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.800758 5120 generic.go:358] "Generic (PLEG): container finished" podID="c5a872b8-950f-422a-9b1d-aaf761e5295c" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.801364 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerDied","Data":"d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.802324 5120 scope.go:117] "RemoveContainer" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.809894 5120 generic.go:358] "Generic (PLEG): container finished" podID="9836015c-341f-44a4-a0b1-2d155148b264" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.810014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerDied","Data":"5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.810660 5120 scope.go:117] "RemoveContainer" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.819300 5120 generic.go:358] "Generic (PLEG): container finished" podID="e3b00756-b775-4a1c-90b1-852a7f1712b7" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.819626 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerDied","Data":"e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.820366 5120 scope.go:117] "RemoveContainer" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.878216 5120 scope.go:117] "RemoveContainer" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc kubenswrapper[5120]: E0122 12:15:29.895158 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": container with ID starting with 0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3 not found: ID does not exist" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc 
kubenswrapper[5120]: I0122 12:15:29.895212 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} err="failed to get container status \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": rpc error: code = NotFound desc = could not find container \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": container with ID starting with 0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3 not found: ID does not exist" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.929260 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.937618 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.828075 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.833882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.838028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.840287 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" event={"ID":"a388a8ad-2606-4be5-9640-e8b11efa3daa","Type":"ContainerStarted","Data":"7dc973f07cf99aa6d3dc92d6eeaff63f2b111736c9320e7d4c19807b7015e888"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.840344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" event={"ID":"a388a8ad-2606-4be5-9640-e8b11efa3daa","Type":"ContainerStarted","Data":"0f911a0b193c43478a45ab1d7dd2f2abdc46d8a07b4fab83c46b1df9c92fd318"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.844182 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.850474 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.872991 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" 
podStartSLOduration=2.8729696689999997 podStartE2EDuration="2.872969669s" podCreationTimestamp="2026-01-22 12:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:15:30.867519888 +0000 UTC m=+1665.611468229" watchObservedRunningTime="2026-01-22 12:15:30.872969669 +0000 UTC m=+1665.616918010" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.589399 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" path="/var/lib/kubelet/pods/f4812e83-6f17-4bad-8aaa-1521eb0b590f/volumes" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.861229 5120 generic.go:358] "Generic (PLEG): container finished" podID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.861292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerDied","Data":"c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.862306 5120 scope.go:117] "RemoveContainer" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.863049 5120 scope.go:117] "RemoveContainer" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.863475 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_service-telemetry(f2b79a21-0ce0-4563-9ea9-d7cd1e19652d)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" podUID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.864893 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.865110 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerDied","Data":"97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.865593 5120 scope.go:117] "RemoveContainer" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.865880 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_service-telemetry(d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" podUID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.873619 5120 generic.go:358] "Generic (PLEG): container finished" podID="c5a872b8-950f-422a-9b1d-aaf761e5295c" 
containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.873888 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerDied","Data":"274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.874457 5120 scope.go:117] "RemoveContainer" containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.874859 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_service-telemetry(c5a872b8-950f-422a-9b1d-aaf761e5295c)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" podUID="c5a872b8-950f-422a-9b1d-aaf761e5295c" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.878396 5120 generic.go:358] "Generic (PLEG): container finished" podID="9836015c-341f-44a4-a0b1-2d155148b264" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.878518 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerDied","Data":"40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.879112 5120 scope.go:117] "RemoveContainer" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.879432 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_service-telemetry(9836015c-341f-44a4-a0b1-2d155148b264)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" podUID="9836015c-341f-44a4-a0b1-2d155148b264" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.881162 5120 generic.go:358] "Generic (PLEG): container finished" podID="e3b00756-b775-4a1c-90b1-852a7f1712b7" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.881828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerDied","Data":"49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.882105 5120 scope.go:117] "RemoveContainer" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.882306 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_service-telemetry(e3b00756-b775-4a1c-90b1-852a7f1712b7)\"" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" podUID="e3b00756-b775-4a1c-90b1-852a7f1712b7" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.924523 5120 scope.go:117] "RemoveContainer" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.994818 5120 scope.go:117] "RemoveContainer" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" Jan 22 12:15:32 crc kubenswrapper[5120]: I0122 12:15:32.074926 5120 scope.go:117] "RemoveContainer" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" Jan 22 12:15:32 crc kubenswrapper[5120]: I0122 12:15:32.124940 5120 scope.go:117] "RemoveContainer" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.938325 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.948474 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.950673 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.951112 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.952319 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.054919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.055044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.055132 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156174 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156314 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: 
\"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156390 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.157737 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.163647 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.176452 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.272256 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.801435 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.923282 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"17ccb7ef-92f9-4fe2-aeac-92f706339496","Type":"ContainerStarted","Data":"72ffffa56ff8da69c001f511b2eb49b0a1f748a037ea79c351dad8dcb565b92e"} Jan 22 12:15:42 crc kubenswrapper[5120]: I0122 12:15:42.572190 5120 scope.go:117] "RemoveContainer" containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" Jan 22 12:15:45 crc kubenswrapper[5120]: I0122 12:15:45.586251 5120 scope.go:117] "RemoveContainer" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" Jan 22 12:15:46 crc kubenswrapper[5120]: I0122 12:15:46.581515 5120 scope.go:117] "RemoveContainer" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" Jan 22 12:15:46 crc kubenswrapper[5120]: I0122 12:15:46.582091 5120 scope.go:117] "RemoveContainer" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" Jan 22 12:15:47 crc kubenswrapper[5120]: I0122 12:15:47.572350 5120 scope.go:117] "RemoveContainer" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.074586 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"febdf5e56d12592ccae563973e2be0d9b9fd0ff7ba6788f899660dbde3c33155"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.079402 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"ea31889c07661f32adbeaba68512715bc8f03db1e4ec070763ff42266e4261c8"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.083563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"17ccb7ef-92f9-4fe2-aeac-92f706339496","Type":"ContainerStarted","Data":"1d80ecf5c4bdd4bc9b734189011d52d2dc1dd42b636073d41a7d36349d60d91a"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.089936 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"cf3cb73f1c400392061d7fdb348ff1ec801b22ba473c1785e9149b18c55b0c85"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.095826 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"54228982152cf02430d6dc29001cb0c614b6034eb69f8ed3d66b0fa9ae786746"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.098485 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"eed3dfd2ff2681e3e2d219a0190a0b4a36ee15be4ad07e2da3a09e5042bacb0b"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.261082 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.893065712 podStartE2EDuration="14.26105986s" podCreationTimestamp="2026-01-22 12:15:34 +0000 UTC" firstStartedPulling="2026-01-22 12:15:35.816316832 +0000 UTC m=+1670.560265173" lastFinishedPulling="2026-01-22 12:15:47.18431098 +0000 UTC m=+1681.928259321" observedRunningTime="2026-01-22 12:15:48.236307539 +0000 UTC m=+1682.980255880" watchObservedRunningTime="2026-01-22 12:15:48.26105986 +0000 UTC m=+1683.005008201" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.594980 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"] Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.729013 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.728801 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"] Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.735971 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736198 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736362 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736550 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.737109 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.737176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.795964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796106 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796282 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796426 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796503 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: 
\"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796569 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796639 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.897988 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899256 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899354 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899638 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899494 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899708 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.922789 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.033821 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.040446 5120 util.go:30] "No sandbox for pod can be found. 
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.042986 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.056294 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9"
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.105640 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl"
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.207614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl"
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.256170 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl"
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.372788 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.586170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"]
Jan 22 12:15:49 crc kubenswrapper[5120]: W0122 12:15:49.592150 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f7c177d_a587_4302_b084_7d4c780bf78b.slice/crio-b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de WatchSource:0}: Error finding container b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de: Status 404 returned error can't find the container with id b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de
Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.644401 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Jan 22 12:15:49 crc kubenswrapper[5120]: W0122 12:15:49.657590 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25098451_fba7_406a_8973_0df221d16bda.slice/crio-429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c WatchSource:0}: Error finding container 429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c: Status 404 returned error can't find the container with id 429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c
Jan 22 12:15:50 crc kubenswrapper[5120]: I0122 12:15:50.135471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de"}
Jan 22 12:15:50 crc kubenswrapper[5120]: I0122 12:15:50.137297 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerStarted","Data":"429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c"}
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.136615 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"]
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.178915 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"]
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.179106 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.437555 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.437911 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\""
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.438198 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.447103 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.549298 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.572466 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.763909 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:01 crc kubenswrapper[5120]: I0122 12:16:01.972637 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:16:01 crc kubenswrapper[5120]: I0122 12:16:01.973191 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:16:03 crc kubenswrapper[5120]: I0122 12:16:03.466493 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"]
Jan 22 12:16:03 crc kubenswrapper[5120]: I0122 12:16:03.488556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerStarted","Data":"564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda"}
Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.503246 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f"}
Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.512705 5120 generic.go:358] "Generic (PLEG): container finished" podID="25098451-fba7-406a-8973-0df221d16bda" containerID="ce66866e870ec5d7fb68c32efb8bbeee1c3238639c4a8df944eb20172469a38e" exitCode=0
Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.512763 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerDied","Data":"ce66866e870ec5d7fb68c32efb8bbeee1c3238639c4a8df944eb20172469a38e"}
Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.524668 5120 generic.go:358] "Generic (PLEG): container finished" podID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerID="7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23" exitCode=0
Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.525185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerDied","Data":"7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23"}
Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.806421 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
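The probe output above records exactly what the prober attempted: an HTTP GET against http://127.0.0.1:8798/health that was refused. A minimal Go sketch of that kind of HTTP liveness check follows, using the endpoint taken from the log; the success range (any status from 200 up to but not including 400) matches the documented convention for Kubernetes HTTP probes, but this is an illustration, not the prober's actual code.

// Sketch of an HTTP liveness check: connection errors and out-of-range
// status codes count as probe failures.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}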
Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.961320 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"25098451-fba7-406a-8973-0df221d16bda\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") "
Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.969859 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx" (OuterVolumeSpecName: "kube-api-access-t8skx") pod "25098451-fba7-406a-8973-0df221d16bda" (UID: "25098451-fba7-406a-8973-0df221d16bda"). InnerVolumeSpecName "kube-api-access-t8skx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.003552 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_25098451-fba7-406a-8973-0df221d16bda/curl/0.log"
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.065343 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.309384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log"
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535157 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerDied","Data":"429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c"}
Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535304 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c"
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.527057 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572714 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerDied","Data":"564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda"}
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572792 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda"
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572933 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5"
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.609564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") "
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.620094 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl" (OuterVolumeSpecName: "kube-api-access-mn8dl") pod "5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" (UID: "5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca"). InnerVolumeSpecName "kube-api-access-mn8dl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.711402 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:09 crc kubenswrapper[5120]: I0122 12:16:09.605946 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"]
Jan 22 12:16:09 crc kubenswrapper[5120]: I0122 12:16:09.613339 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"]
Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.586678 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" path="/var/lib/kubelet/pods/86fa02fb-d5af-46f8-b19a-9af5fd7e5353/volumes"
Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.604579 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8"}
Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.626355 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" podStartSLOduration=2.740024253 podStartE2EDuration="23.626333908s" podCreationTimestamp="2026-01-22 12:15:48 +0000 UTC" firstStartedPulling="2026-01-22 12:15:49.600284224 +0000 UTC m=+1684.344232565" lastFinishedPulling="2026-01-22 12:16:10.486593869 +0000 UTC m=+1705.230542220" observedRunningTime="2026-01-22 12:16:11.62597769 +0000 UTC m=+1706.369926051" watchObservedRunningTime="2026-01-22 12:16:11.626333908 +0000 UTC m=+1706.370282249"
Jan 22 12:16:31 crc kubenswrapper[5120]: I0122 12:16:31.972760 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:16:31 crc kubenswrapper[5120]: I0122 12:16:31.973561 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:16:36 crc kubenswrapper[5120]: I0122 12:16:36.505455 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log"
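The "Observed pod startup duration" record above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (12:15:48 to 12:16:11.626, i.e. 23.626333908s), and podStartSLOduration excludes the image-pull window, which the m=+ monotonic offsets give as 1705.230542220 - 1684.344232565 = 20.886309655s; 23.626333908 - 20.886309655 = 2.740024253, exactly the logged SLO value. A two-line Go check of that arithmetic:

// Check: SLO startup duration = end-to-end duration minus the pull window.
package main

import "fmt"

func main() {
	e2e := 23.626333908                     // podStartE2EDuration, seconds
	pull := 1705.230542220 - 1684.344232565 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("%.9f\n", e2e-pull)          // ~2.740024253, the logged podStartSLOduration
}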
Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.858423 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerID="ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f" exitCode=0
Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.858817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f"}
Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.859561 5120 scope.go:117] "RemoveContainer" containerID="ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f"
Jan 22 12:16:42 crc kubenswrapper[5120]: I0122 12:16:42.894040 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerID="98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8" exitCode=0
Jan 22 12:16:42 crc kubenswrapper[5120]: I0122 12:16:42.894115 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8"}
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.175314 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9"
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246396 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246471 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248219 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248337 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248643 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248932 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") "
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.254337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x" (OuterVolumeSpecName: "kube-api-access-8xw8x") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "kube-api-access-8xw8x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.266830 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.267568 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.267913 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.269421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.274354 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.274651 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351333 5120 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351372 5120 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351385 5120 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351396 5120 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351405 5120 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351414 5120 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351423 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") on node \"crc\" DevicePath \"\""
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.917342 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9"
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.918265 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de"}
Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.918381 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de"
Jan 22 12:16:46 crc kubenswrapper[5120]: I0122 12:16:46.317210 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-collectd/0.log"
Jan 22 12:16:46 crc kubenswrapper[5120]: I0122 12:16:46.640447 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-ceilometer/0.log"
Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.004175 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-48w6f_a388a8ad-2606-4be5-9640-e8b11efa3daa/default-interconnect/0.log"
Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.329996 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/2.log"
Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.676379 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/sg-core/0.log"
Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.987358 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/2.log"
Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.305485 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/sg-core/0.log"
Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.630302 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/2.log"
Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.973360 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/sg-core/0.log"
Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.318107 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/2.log"
Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.612010 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/sg-core/0.log"
Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.995239 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/2.log"
Jan 22 12:16:50 crc kubenswrapper[5120]: I0122 12:16:50.343934 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/sg-core/0.log"
Jan 22 12:16:53 crc kubenswrapper[5120]: I0122 12:16:53.990296 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log"
Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.278084 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/prometheus/0.log"
Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.576695 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elasticsearch/0.log"
Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.900366 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log"
Jan 22 12:16:55 crc kubenswrapper[5120]: I0122 12:16:55.199546 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/alertmanager/0.log"
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.395467 5120 scope.go:117] "RemoveContainer" containerID="e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd"
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.972523 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.973090 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.973295 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269"
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.974334 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.974526 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" gracePeriod=600
Jan 22 12:17:02 crc kubenswrapper[5120]: E0122 12:17:02.728578 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102331 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" exitCode=0
Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102382 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"}
Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102480 5120 scope.go:117] "RemoveContainer" containerID="719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"
Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.103570 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:17:03 crc kubenswrapper[5120]: E0122 12:17:03.104228 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:17:11 crc kubenswrapper[5120]: I0122 12:17:11.158132 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-69f575f8bc-9msdn_71c6d75c-6634-4017-92b9-487a57bcc47b/operator/0.log"
Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.578518 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:17:15 crc kubenswrapper[5120]: E0122 12:17:15.579321 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.582693 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log"
Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.932721 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_17ccb7ef-92f9-4fe2-aeac-92f706339496/qdr/0.log"
Jan 22 12:17:26 crc kubenswrapper[5120]: I0122 12:17:26.573504 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:17:26 crc kubenswrapper[5120]: E0122 12:17:26.574894 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.576651 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:17:41 crc kubenswrapper[5120]: E0122 12:17:41.578995 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.668631 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"]
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669758 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669839 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669910 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669982 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670072 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670129 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670189 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670244 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670416 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670477 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670541 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670597 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.676634 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681066 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-2xb8g\"/\"default-dockercfg-ldbdd\""
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681612 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2xb8g\"/\"kube-root-ca.crt\""
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681822 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2xb8g\"/\"openshift-service-ca.crt\""
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.683193 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"]
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.746927 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.747026 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848104 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848579 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.879166 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:42 crc kubenswrapper[5120]: I0122 12:17:42.016823 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2xb8g/must-gather-fcsxx"
Jan 22 12:17:42 crc kubenswrapper[5120]: I0122 12:17:42.461732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"]
Jan 22 12:17:43 crc kubenswrapper[5120]: I0122 12:17:43.483236 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"78631c107310d4abd1880128712672ef8b12b3bdf1600a786fa65b1af64baa60"}
Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.332162 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.332230 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.345561 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.345562 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.564064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"d386d5080c7fc835e26078b82ade2811a34f47c64b7bd93476027a8ab5c2517c"}
Jan 22 12:17:52 crc kubenswrapper[5120]: I0122 12:17:52.573433 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"364c7bf6a6f388a3e4047bdae372a183325ae5db514b5ac5af7808cecc0fedc2"}
Jan 22 12:17:52 crc kubenswrapper[5120]: I0122 12:17:52.592643 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" podStartSLOduration=2.7365270539999997 podStartE2EDuration="11.592620019s" podCreationTimestamp="2026-01-22 12:17:41 +0000 UTC" firstStartedPulling="2026-01-22 12:17:42.478623914 +0000 UTC m=+1797.222572295" lastFinishedPulling="2026-01-22 12:17:51.334716919 +0000 UTC m=+1806.078665260" observedRunningTime="2026-01-22 12:17:52.586412478 +0000 UTC m=+1807.330360809" watchObservedRunningTime="2026-01-22 12:17:52.592620019 +0000 UTC m=+1807.336568360"
Jan 22 12:17:53 crc kubenswrapper[5120]: I0122 12:17:53.572912 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:17:53 crc kubenswrapper[5120]: E0122 12:17:53.573366 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.138430 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"]
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.156092 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.156226 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.172747 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.173057 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.175398 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.193624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.295462 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.320645 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.485162 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.926425 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:01 crc kubenswrapper[5120]: I0122 12:18:01.646332 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerStarted","Data":"3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b"} Jan 22 12:18:02 crc kubenswrapper[5120]: I0122 12:18:02.656812 5120 generic.go:358] "Generic (PLEG): container finished" podID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerID="5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca" exitCode=0 Jan 22 12:18:02 crc kubenswrapper[5120]: I0122 12:18:02.656951 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerDied","Data":"5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca"} Jan 22 12:18:03 crc kubenswrapper[5120]: I0122 12:18:03.931010 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.105046 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.115274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr" (OuterVolumeSpecName: "kube-api-access-rmjjr") pod "f97383a0-beb0-4ff9-a965-28e0e9b1addb" (UID: "f97383a0-beb0-4ff9-a965-28e0e9b1addb"). InnerVolumeSpecName "kube-api-access-rmjjr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.207992 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") on node \"crc\" DevicePath \"\"" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682495 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerDied","Data":"3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b"} Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682914 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682544 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.945974 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-fhxb8_3cc31b0e-b225-470f-870b-f89666eae47b/control-plane-machine-set-operator/0.log" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.974602 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/kube-rbac-proxy/0.log" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.988871 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/machine-api-operator/0.log" Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.009195 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.016914 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.581918 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" path="/var/lib/kubelet/pods/2284d302-27de-4f84-9cd9-0b27dc76e987/volumes" Jan 22 12:18:07 crc kubenswrapper[5120]: I0122 12:18:07.572227 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:07 crc kubenswrapper[5120]: E0122 12:18:07.573084 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.432598 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.445236 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.459830 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.445267 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.462542 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.476145 5120 log.go:25] "Finished parsing log file" 
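The auto-csr-approver pods above are CronJob runs, and their name suffixes encode the schedule: the CronJob controller names each Job after its scheduled time in minutes since the Unix epoch. 29484736 minutes is 1,769,084,160 seconds, i.e. 2026-01-22 12:16:00 UTC, and 29484738 is two minutes later at 12:18, which is exactly when the two pods appear in this log (a */2-minute schedule). A tiny Go check of that decoding:

// Decode CronJob-style job-name suffixes (minutes since the Unix epoch).
package main

import (
	"fmt"
	"time"
)

func main() {
	for _, minutes := range []int64{29484736, 29484738} {
		t := time.Unix(minutes*60, 0).UTC()
		fmt.Printf("auto-csr-approver-%d -> %s\n", minutes, t.Format("2006-01-02 15:04 MST"))
	}
	// prints 2026-01-22 12:16 UTC and 2026-01-22 12:18 UTC, matching the log
}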
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.496502 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.508125 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.387530 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.397696 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.433947 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.448192 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.457010 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.474032 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.490068 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.511276 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.518833 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.533611 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.540515 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/util/0.log" 
Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.551493 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.571481 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:22 crc kubenswrapper[5120]: E0122 12:18:22.571716 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.777602 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/registry-server/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.784794 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/extract-utilities/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.791775 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/extract-content/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.067894 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/registry-server/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.073055 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/extract-utilities/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.079997 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/extract-content/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.094366 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-nzw8g_abdba773-b95f-4d73-bcb5-d36526f8e13d/marketplace-operator/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.373525 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/registry-server/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.380651 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/extract-utilities/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.391773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/extract-content/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.191207 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 
22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.208836 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.228210 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.249601 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.263357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:37 crc kubenswrapper[5120]: I0122 12:18:37.572241 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:37 crc kubenswrapper[5120]: E0122 12:18:37.572670 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.061913 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.076104 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.091395 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.117787 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.133034 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.555520 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.568741 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: 
I0122 12:18:39.580524 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.082670 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.094695 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.115176 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.597186 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-fhxb8_3cc31b0e-b225-470f-870b-f89666eae47b/control-plane-machine-set-operator/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.610528 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/kube-rbac-proxy/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.621172 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/machine-api-operator/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.201634 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/alertmanager/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.212760 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.222024 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.241460 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/init-config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.253314 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_25098451-fba7-406a-8973-0df221d16bda/curl/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.265721 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.266216 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.272846 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.286153 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.292317 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.292575 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.298095 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.310481 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.311384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.316645 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.329621 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.340920 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.342773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.347752 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.360703 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.378402 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.378670 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.383930 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.402595 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-48w6f_a388a8ad-2606-4be5-9640-e8b11efa3daa/default-interconnect/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.413888 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.467104 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-796f77fbdf-t9sbr_164c4d54-e519-4e1e-9e4b-3e2881312d55/manager/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.492357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elasticsearch/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.507977 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elastic-internal-init-filesystem/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.514546 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elastic-internal-suspend/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.532514 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-sd4wv_b6e8a299-2880-4236-8f8b-b6983db7ed96/interconnect-operator/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.548848 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/prometheus/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.558007 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.566032 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.574917 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/init-config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.639885 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/docker-build/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.647397 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/git-clone/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.655554 5120 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/manage-dockerfile/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.671631 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_17ccb7ef-92f9-4fe2-aeac-92f706339496/qdr/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.729653 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/docker-build/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.736194 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/git-clone/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.748329 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/manage-dockerfile/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.995025 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-69f575f8bc-9msdn_71c6d75c-6634-4017-92b9-487a57bcc47b/operator/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.050360 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.056704 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.067764 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/manage-dockerfile/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.124222 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.132103 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.142533 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/manage-dockerfile/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.219049 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.228367 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.235313 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/manage-dockerfile/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.134790 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.165912 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-collectd/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.174759 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-ceilometer/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.609356 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.610849 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/1.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.625228 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/kube-multus-additional-cni-plugins/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.637631 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/egress-router-binary-copy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.643015 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/cni-plugins/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.652521 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/bond-cni-plugin/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.659817 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/routeoverride-cni/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.669074 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/whereabouts-cni-bincopy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.676911 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/whereabouts-cni/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.693773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-dp8rm_da2b1465-54c1-4a7d-8cb6-755b28e448b8/multus-admission-controller/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.703862 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-dp8rm_da2b1465-54c1-4a7d-8cb6-755b28e448b8/kube-rbac-proxy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.735848 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ldwx4_dababdca-8afb-452f-865f-54de3aec21d9/network-metrics-daemon/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.742539 
5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ldwx4_dababdca-8afb-452f-865f-54de3aec21d9/kube-rbac-proxy/0.log" Jan 22 12:18:49 crc kubenswrapper[5120]: I0122 12:18:49.571835 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:49 crc kubenswrapper[5120]: E0122 12:18:49.572242 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:01 crc kubenswrapper[5120]: I0122 12:19:01.576606 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:01 crc kubenswrapper[5120]: E0122 12:19:01.578205 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:01 crc kubenswrapper[5120]: I0122 12:19:01.590281 5120 scope.go:117] "RemoveContainer" containerID="afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573" Jan 22 12:19:13 crc kubenswrapper[5120]: I0122 12:19:13.572972 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:13 crc kubenswrapper[5120]: E0122 12:19:13.573938 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:25 crc kubenswrapper[5120]: I0122 12:19:25.575690 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:25 crc kubenswrapper[5120]: E0122 12:19:25.576631 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:40 crc kubenswrapper[5120]: I0122 12:19:40.572423 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:40 crc kubenswrapper[5120]: E0122 12:19:40.574098 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:53 crc kubenswrapper[5120]: I0122 12:19:53.572537 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:53 crc kubenswrapper[5120]: E0122 12:19:53.573933 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.137357 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139163 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139181 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139352 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.170194 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.170356 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.172783 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.173739 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.173868 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.288584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.390710 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.435028 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.492535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.777012 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.881319 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerStarted","Data":"1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67"} Jan 22 12:20:02 crc kubenswrapper[5120]: I0122 12:20:02.904663 5120 generic.go:358] "Generic (PLEG): container finished" podID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerID="be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2" exitCode=0 Jan 22 12:20:02 crc kubenswrapper[5120]: I0122 12:20:02.904730 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerDied","Data":"be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2"} Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.191620 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.257412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"6609faf3-2234-4edf-96b2-132b3e0c23c4\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.266635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5" (OuterVolumeSpecName: "kube-api-access-xk6x5") pod "6609faf3-2234-4edf-96b2-132b3e0c23c4" (UID: "6609faf3-2234-4edf-96b2-132b3e0c23c4"). InnerVolumeSpecName "kube-api-access-xk6x5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.359413 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") on node \"crc\" DevicePath \"\"" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.571923 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:04 crc kubenswrapper[5120]: E0122 12:20:04.572685 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.931571 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerDied","Data":"1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67"} Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.932098 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.931605 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.270227 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.279668 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.590077 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" path="/var/lib/kubelet/pods/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a/volumes" Jan 22 12:20:16 crc kubenswrapper[5120]: I0122 12:20:16.572589 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:16 crc kubenswrapper[5120]: E0122 12:20:16.574214 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:28 crc kubenswrapper[5120]: I0122 12:20:28.574707 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:28 crc kubenswrapper[5120]: E0122 12:20:28.576808 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:41 crc kubenswrapper[5120]: I0122 12:20:41.571943 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:41 crc kubenswrapper[5120]: E0122 12:20:41.573030 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.038818 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.040874 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.040889 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.041087 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.047347 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.071305 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169564 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169697 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169816 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271744 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271822 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.272428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.272477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.300477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.392976 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.654237 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.665700 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532145 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de" exitCode=0 Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532352 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"} Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532661 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerStarted","Data":"bda931994cbf69f925888b33d7ff244764b67e604b557430bec63fa583126f66"} Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.579583 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:55 crc kubenswrapper[5120]: E0122 12:20:55.579993 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:57 crc kubenswrapper[5120]: I0122 12:20:57.556465 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f" exitCode=0 Jan 22 12:20:57 crc kubenswrapper[5120]: I0122 12:20:57.556563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"} Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.572105 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerStarted","Data":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"} Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.618276 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cd8qm" podStartSLOduration=3.460111542 podStartE2EDuration="4.618251715s" podCreationTimestamp="2026-01-22 12:20:54 +0000 UTC" 
firstStartedPulling="2026-01-22 12:20:55.533463527 +0000 UTC m=+1990.277411868" lastFinishedPulling="2026-01-22 12:20:56.6916037 +0000 UTC m=+1991.435552041" observedRunningTime="2026-01-22 12:20:58.599337709 +0000 UTC m=+1993.343286050" watchObservedRunningTime="2026-01-22 12:20:58.618251715 +0000 UTC m=+1993.362200056" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.814350 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.821728 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.837833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.960947 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.961479 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.961523 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.063854 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064639 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod 
\"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.099175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.151616 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.495110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.590375 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"0416a99eafcad5b84de8e060b9ad9afbc806dd6a6d5f802cf815e9fb58c4057a"} Jan 22 12:21:00 crc kubenswrapper[5120]: I0122 12:21:00.603776 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c" exitCode=0 Jan 22 12:21:00 crc kubenswrapper[5120]: I0122 12:21:00.603895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"} Jan 22 12:21:01 crc kubenswrapper[5120]: I0122 12:21:01.616615 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"} Jan 22 12:21:01 crc kubenswrapper[5120]: I0122 12:21:01.769179 5120 scope.go:117] "RemoveContainer" containerID="21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.394118 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.394591 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.460695 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.647724 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d" exitCode=0 Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 
12:21:04.648207 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"} Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.698113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.611522 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.674882 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cd8qm" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" containerID="cri-o://4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" gracePeriod=2 Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.675406 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"} Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.701784 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xd9rs" podStartSLOduration=8.080341133 podStartE2EDuration="8.701760171s" podCreationTimestamp="2026-01-22 12:20:58 +0000 UTC" firstStartedPulling="2026-01-22 12:21:00.605068499 +0000 UTC m=+1995.349016850" lastFinishedPulling="2026-01-22 12:21:01.226487547 +0000 UTC m=+1995.970435888" observedRunningTime="2026-01-22 12:21:06.701553307 +0000 UTC m=+2001.445501648" watchObservedRunningTime="2026-01-22 12:21:06.701760171 +0000 UTC m=+2001.445708512" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.123595 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.225696 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.226268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.226506 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.227378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities" (OuterVolumeSpecName: "utilities") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.234883 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96" (OuterVolumeSpecName: "kube-api-access-xcw96") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "kube-api-access-xcw96". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.330544 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.330615 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") on node \"crc\" DevicePath \"\"" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.343334 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.432197 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694176 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" exitCode=0 Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694376 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694358 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"} Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694482 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"bda931994cbf69f925888b33d7ff244764b67e604b557430bec63fa583126f66"} Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694519 5120 scope.go:117] "RemoveContainer" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.728895 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.732472 5120 scope.go:117] "RemoveContainer" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.740189 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.756020 5120 scope.go:117] "RemoveContainer" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.805893 5120 scope.go:117] "RemoveContainer" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.806458 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": container with ID starting with 4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4 not found: ID does not exist" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806497 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"} err="failed to get container status \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": rpc error: code = NotFound desc = could not find container \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": container with ID starting with 4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4 not found: ID does not exist" Jan 22 12:21:07 crc 
kubenswrapper[5120]: I0122 12:21:07.806522 5120 scope.go:117] "RemoveContainer" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f" Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.806780 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": container with ID starting with a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f not found: ID does not exist" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806805 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"} err="failed to get container status \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": rpc error: code = NotFound desc = could not find container \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": container with ID starting with a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f not found: ID does not exist" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806821 5120 scope.go:117] "RemoveContainer" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de" Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.807091 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": container with ID starting with a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de not found: ID does not exist" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de" Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.807116 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"} err="failed to get container status \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": rpc error: code = NotFound desc = could not find container \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": container with ID starting with a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de not found: ID does not exist" Jan 22 12:21:08 crc kubenswrapper[5120]: I0122 12:21:08.584520 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:21:08 crc kubenswrapper[5120]: E0122 12:21:08.585301 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.152115 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.153391 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:21:09 crc kubenswrapper[5120]: 
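The RemoveContainer / NotFound pairs above are the kubelet re-requesting deletion of containers that are already gone: the CRI runtime answers with a gRPC NotFound status, which is logged and then tolerated. A stdlib-only sketch of that tolerate-NotFound pattern; the sentinel error and runtime type here are illustrative, not the CRI API:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for the runtime's gRPC NotFound status.
    var errNotFound = errors.New("container not found")

    type runtime struct{ containers map[string]bool }

    func (r *runtime) RemoveContainer(id string) error {
    	if !r.containers[id] {
    		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    	}
    	delete(r.containers, id)
    	return nil
    }

    func main() {
    	rt := &runtime{containers: map[string]bool{"4af5eb64": true}}
    	// The second call hits the NotFound path, mirroring the log above.
    	for i := 0; i < 2; i++ {
    		switch err := rt.RemoveContainer("4af5eb64"); {
    		case err == nil:
    			fmt.Println("removed")
    		case errors.Is(err, errNotFound):
    			fmt.Println("already gone, treating as success:", err)
    		default:
    			fmt.Println("real failure:", err)
    		}
    	}
    }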
Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.583759 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" path="/var/lib/kubelet/pods/95d21ef3-45db-4786-bb22-1a4660b26e98/volumes"
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.775070 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.833001 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.833361 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xd9rs" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" containerID="cri-o://f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806" gracePeriod=2
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.794311 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836008 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806" exitCode=0
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836311 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"}
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"0416a99eafcad5b84de8e060b9ad9afbc806dd6a6d5f802cf815e9fb58c4057a"}
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836363 5120 scope.go:117] "RemoveContainer" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836382 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.859975 5120 scope.go:117] "RemoveContainer" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.889629 5120 scope.go:117] "RemoveContainer" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.918918 5120 scope.go:117] "RemoveContainer" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.919431 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": container with ID starting with f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806 not found: ID does not exist" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919485 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"} err="failed to get container status \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": rpc error: code = NotFound desc = could not find container \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": container with ID starting with f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806 not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919512 5120 scope.go:117] "RemoveContainer" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.919809 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": container with ID starting with a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d not found: ID does not exist" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919828 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"} err="failed to get container status \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": rpc error: code = NotFound desc = could not find container \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": container with ID starting with a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919841 5120 scope.go:117] "RemoveContainer" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.920174 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": container with ID starting with 9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c not found: ID does not exist" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.920211 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"} err="failed to get container status \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": rpc error: code = NotFound desc = could not find container \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": container with ID starting with 9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.937222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.937393 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.938627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities" (OuterVolumeSpecName: "utilities") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.938893 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.939412 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.946244 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x" (OuterVolumeSpecName: "kube-api-access-9r46x") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "kube-api-access-9r46x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.975822 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.041654 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.041722 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.190402 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.202543 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.580272 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:21 crc kubenswrapper[5120]: E0122 12:21:21.580757 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.586291 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" path="/var/lib/kubelet/pods/50b831c9-8487-4923-8280-3f8732cc4e62/volumes"
Jan 22 12:21:32 crc kubenswrapper[5120]: I0122 12:21:32.572563 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:32 crc kubenswrapper[5120]: E0122 12:21:32.573720 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:44 crc kubenswrapper[5120]: I0122 12:21:44.573231 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:44 crc kubenswrapper[5120]: E0122 12:21:44.574822 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:58 crc kubenswrapper[5120]: I0122 12:21:58.571805 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:58 crc kubenswrapper[5120]: E0122 12:21:58.574037 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.147661 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150804 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150897 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150971 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151064 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151233 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151295 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151376 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151445 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151511 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151580 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151655 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151710 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151948 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.152066 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.164828 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.170458 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.171078 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.171171 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.174121 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.260543 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.365891 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.402337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.499338 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.748925 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:01 crc kubenswrapper[5120]: I0122 12:22:01.423073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerStarted","Data":"1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6"} Jan 22 12:22:03 crc kubenswrapper[5120]: I0122 12:22:03.445315 5120 generic.go:358] "Generic (PLEG): container finished" podID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerID="db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e" exitCode=0 Jan 22 12:22:03 crc kubenswrapper[5120]: I0122 12:22:03.445400 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerDied","Data":"db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e"} Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.732404 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.750474 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.763515 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr" (OuterVolumeSpecName: "kube-api-access-trbpr") pod "4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" (UID: "4bfb1f97-ca93-4138-99d0-06fcb09ba8f5"). InnerVolumeSpecName "kube-api-access-trbpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.852236 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470260 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerDied","Data":"1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6"} Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470396 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470303 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.817647 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.825896 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:22:07 crc kubenswrapper[5120]: I0122 12:22:07.584566 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" path="/var/lib/kubelet/pods/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca/volumes" Jan 22 12:22:12 crc kubenswrapper[5120]: I0122 12:22:12.573064 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:22:13 crc kubenswrapper[5120]: I0122 12:22:13.563214 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.490220 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.490247 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.506531 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.506537 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:23:02 crc kubenswrapper[5120]: I0122 12:23:02.000605 5120 scope.go:117] "RemoveContainer" containerID="7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.669042 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670632 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670653 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670873 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.737155 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.737421 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907365 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907462 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907885 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.009873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010172 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010573 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.046608 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod 
\"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.070194 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.541269 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125190 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82" exitCode=0 Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125398 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82"} Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125434 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"9b088af40036e4f9a0b2f6a7b5932fb6d2a72cbf2192757269776a56fb425ec6"} Jan 22 12:23:11 crc kubenswrapper[5120]: I0122 12:23:11.133684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9"} Jan 22 12:23:12 crc kubenswrapper[5120]: I0122 12:23:12.151228 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9" exitCode=0 Jan 22 12:23:12 crc kubenswrapper[5120]: I0122 12:23:12.152032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9"} Jan 22 12:23:13 crc kubenswrapper[5120]: I0122 12:23:13.163604 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec"} Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.076099 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.087002 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.132254 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.155264 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7ncp" podStartSLOduration=10.468687468 podStartE2EDuration="11.15524585s" podCreationTimestamp="2026-01-22 12:23:08 +0000 UTC" firstStartedPulling="2026-01-22 
12:23:10.128841987 +0000 UTC m=+2124.872790368" lastFinishedPulling="2026-01-22 12:23:10.815400409 +0000 UTC m=+2125.559348750" observedRunningTime="2026-01-22 12:23:13.189004189 +0000 UTC m=+2127.932952540" watchObservedRunningTime="2026-01-22 12:23:19.15524585 +0000 UTC m=+2133.899194191" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.257939 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:22 crc kubenswrapper[5120]: I0122 12:23:22.837477 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:22 crc kubenswrapper[5120]: I0122 12:23:22.838618 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7ncp" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" containerID="cri-o://f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" gracePeriod=2 Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.258898 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" exitCode=0 Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.258994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec"} Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.744858 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.880187 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.880499 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.892234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6" (OuterVolumeSpecName: "kube-api-access-vgpt6") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "kube-api-access-vgpt6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.915237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.918274 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.929048 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities" (OuterVolumeSpecName: "utilities") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.986554 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.019380 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.019416 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270411 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"9b088af40036e4f9a0b2f6a7b5932fb6d2a72cbf2192757269776a56fb425ec6"} Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270472 5120 scope.go:117] "RemoveContainer" containerID="f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.300781 5120 scope.go:117] "RemoveContainer" containerID="3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.314039 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.323026 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.332390 5120 scope.go:117] "RemoveContainer" containerID="d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82" Jan 22 12:23:25 crc kubenswrapper[5120]: I0122 12:23:25.607150 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" path="/var/lib/kubelet/pods/181146be-5e90-40cd-bd8f-63dd9bf20dc7/volumes" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.156493 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159051 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-content" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159081 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-content" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159132 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159144 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159167 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-utilities" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159180 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-utilities" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159477 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.183523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.183772 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.188335 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.191937 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.193906 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.237695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.340033 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.365678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.504939 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:01 crc kubenswrapper[5120]: W0122 12:24:01.001097 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cde2753_8f27_404a_8fbc_d297e718b3b8.slice/crio-54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1 WatchSource:0}: Error finding container 54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1: Status 404 returned error can't find the container with id 54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1 Jan 22 12:24:01 crc kubenswrapper[5120]: I0122 12:24:01.010782 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:01 crc kubenswrapper[5120]: I0122 12:24:01.641163 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerStarted","Data":"54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1"} Jan 22 12:24:02 crc kubenswrapper[5120]: I0122 12:24:02.651334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerStarted","Data":"265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83"} Jan 22 12:24:02 crc kubenswrapper[5120]: I0122 12:24:02.678853 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484744-7g58z" podStartSLOduration=1.548985007 podStartE2EDuration="2.678820887s" podCreationTimestamp="2026-01-22 12:24:00 +0000 UTC" firstStartedPulling="2026-01-22 12:24:01.004683347 +0000 UTC m=+2175.748631728" lastFinishedPulling="2026-01-22 12:24:02.134519267 +0000 UTC m=+2176.878467608" observedRunningTime="2026-01-22 12:24:02.669702902 +0000 UTC m=+2177.413651263" watchObservedRunningTime="2026-01-22 12:24:02.678820887 +0000 UTC m=+2177.422769228" Jan 22 12:24:03 crc kubenswrapper[5120]: I0122 12:24:03.663354 5120 generic.go:358] "Generic (PLEG): container finished" podID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerID="265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83" exitCode=0 Jan 22 12:24:03 crc kubenswrapper[5120]: I0122 12:24:03.663479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerDied","Data":"265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83"} Jan 22 12:24:04 crc kubenswrapper[5120]: I0122 12:24:04.977383 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.139566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"9cde2753-8f27-404a-8fbc-d297e718b3b8\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.150124 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv" (OuterVolumeSpecName: "kube-api-access-k44cv") pod "9cde2753-8f27-404a-8fbc-d297e718b3b8" (UID: "9cde2753-8f27-404a-8fbc-d297e718b3b8"). InnerVolumeSpecName "kube-api-access-k44cv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.247625 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") on node \"crc\" DevicePath \"\"" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694550 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerDied","Data":"54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1"} Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694644 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.759987 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.768983 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:24:07 crc kubenswrapper[5120]: I0122 12:24:07.590925 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" path="/var/lib/kubelet/pods/f97383a0-beb0-4ff9-a965-28e0e9b1addb/volumes" Jan 22 12:24:31 crc kubenswrapper[5120]: I0122 12:24:31.973037 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:24:31 crc kubenswrapper[5120]: I0122 12:24:31.973951 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:01 crc kubenswrapper[5120]: I0122 12:25:01.972984 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:25:01 crc kubenswrapper[5120]: I0122 12:25:01.974004 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:02 crc kubenswrapper[5120]: I0122 12:25:02.229030 5120 scope.go:117] "RemoveContainer" containerID="5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.973594 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.977213 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.977486 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.978768 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.978987 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" gracePeriod=600 Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.117103 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" exitCode=0 Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.117336 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.120247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.120335 5120 scope.go:117] "RemoveContainer" 
containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.148235 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.150883 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.150904 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.151119 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.159389 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166444 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166781 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.170659 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.242200 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.344890 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.398011 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.491232 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.015111 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.022365 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.430020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerStarted","Data":"743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0"} Jan 22 12:26:02 crc kubenswrapper[5120]: I0122 12:26:02.441052 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerStarted","Data":"6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7"} Jan 22 12:26:02 crc kubenswrapper[5120]: I0122 12:26:02.465720 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" podStartSLOduration=1.608265412 podStartE2EDuration="2.465697299s" podCreationTimestamp="2026-01-22 12:26:00 +0000 UTC" firstStartedPulling="2026-01-22 12:26:01.022540988 +0000 UTC m=+2295.766489329" lastFinishedPulling="2026-01-22 12:26:01.879972865 +0000 UTC m=+2296.623921216" observedRunningTime="2026-01-22 12:26:02.459146757 +0000 UTC m=+2297.203095118" watchObservedRunningTime="2026-01-22 12:26:02.465697299 +0000 UTC m=+2297.209645640" Jan 22 12:26:03 crc kubenswrapper[5120]: I0122 12:26:03.452648 5120 generic.go:358] "Generic (PLEG): container finished" podID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerID="6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7" exitCode=0 Jan 22 12:26:03 crc kubenswrapper[5120]: I0122 12:26:03.452830 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerDied","Data":"6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7"} Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.810141 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.928983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"7cba6b62-807f-4e37-b350-bc4eef13747b\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.938198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg" (OuterVolumeSpecName: "kube-api-access-zc9qg") pod "7cba6b62-807f-4e37-b350-bc4eef13747b" (UID: "7cba6b62-807f-4e37-b350-bc4eef13747b"). InnerVolumeSpecName "kube-api-access-zc9qg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.031710 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") on node \"crc\" DevicePath \"\"" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473281 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerDied","Data":"743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0"} Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473358 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473477 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.545615 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.552324 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.583132 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" path="/var/lib/kubelet/pods/6609faf3-2234-4edf-96b2-132b3e0c23c4/volumes" Jan 22 12:27:02 crc kubenswrapper[5120]: I0122 12:27:02.443025 5120 scope.go:117] "RemoveContainer" containerID="be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.631918 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.633795 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.645736 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.645854 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.168684 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.171652 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.171683 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.172055 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc 
kubenswrapper[5120]: I0122 12:28:00.189925 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.190627 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.195619 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.195732 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.196021 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.344530 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.447520 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.475815 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.519833 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.792843 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.767690 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerStarted","Data":"d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2"} Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.972235 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.972520 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:28:02 crc kubenswrapper[5120]: I0122 12:28:02.779106 5120 generic.go:358] "Generic (PLEG): container finished" podID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerID="87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655" exitCode=0 Jan 22 12:28:02 crc kubenswrapper[5120]: I0122 12:28:02.779178 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerDied","Data":"87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655"} Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.199052 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.261769 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"19e6cf90-948d-4188-8603-4f42f5a2400e\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.275478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp" (OuterVolumeSpecName: "kube-api-access-2lsvp") pod "19e6cf90-948d-4188-8603-4f42f5a2400e" (UID: "19e6cf90-948d-4188-8603-4f42f5a2400e"). InnerVolumeSpecName "kube-api-access-2lsvp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.364044 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") on node \"crc\" DevicePath \"\"" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.803350 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerDied","Data":"d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2"} Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.803462 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.804150 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.313167 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.324607 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.597889 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" path="/var/lib/kubelet/pods/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5/volumes" Jan 22 12:28:31 crc kubenswrapper[5120]: I0122 12:28:31.973524 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:28:31 crc kubenswrapper[5120]: I0122 12:28:31.974482 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.973466 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.974393 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.974448 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.975694 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.975762 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" gracePeriod=600 Jan 22 12:29:02 crc kubenswrapper[5120]: E0122 12:29:02.142552 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465311 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" exitCode=0 Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465581 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465647 5120 scope.go:117] "RemoveContainer" containerID="e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.466635 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:02 crc kubenswrapper[5120]: E0122 12:29:02.467157 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.638634 5120 scope.go:117] "RemoveContainer" containerID="db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e" Jan 22 12:29:13 crc kubenswrapper[5120]: I0122 12:29:13.573677 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:13 crc kubenswrapper[5120]: E0122 12:29:13.574878 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:26 crc kubenswrapper[5120]: I0122 12:29:26.572223 5120 scope.go:117] "RemoveContainer" 
containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:26 crc kubenswrapper[5120]: E0122 12:29:26.573690 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:40 crc kubenswrapper[5120]: I0122 12:29:40.572317 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:40 crc kubenswrapper[5120]: E0122 12:29:40.573739 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:54 crc kubenswrapper[5120]: I0122 12:29:54.572081 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:54 crc kubenswrapper[5120]: E0122 12:29:54.573898 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.162082 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.163891 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.163910 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.164187 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.176545 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.179605 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.180597 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.181245 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.182121 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.189945 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.190181 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.194427 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.194732 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.197302 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354345 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354419 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456720 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456878 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.459047 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.493771 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.507479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.509100 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.511537 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.523268 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: W0122 12:30:00.785982 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f073b85_c7cf_489a_8e89_7bf6bc9a2124.slice/crio-954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc WatchSource:0}: Error finding container 954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc: Status 404 returned error can't find the container with id 954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.788557 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.824403 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: W0122 12:30:00.827464 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aba8941_2ebf_4bf6_94ab_a1b999b2366a.slice/crio-960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3 WatchSource:0}: Error finding container 960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3: Status 404 returned error can't find the container with id 960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3 Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.151129 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerStarted","Data":"954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.153456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerStarted","Data":"68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.153490 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerStarted","Data":"960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.181360 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" podStartSLOduration=1.181337171 podStartE2EDuration="1.181337171s" podCreationTimestamp="2026-01-22 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:30:01.179270571 +0000 UTC m=+2535.923218922" watchObservedRunningTime="2026-01-22 12:30:01.181337171 +0000 UTC m=+2535.925285662" Jan 22 12:30:02 crc kubenswrapper[5120]: I0122 12:30:02.176082 5120 generic.go:358] "Generic (PLEG): container finished" podID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerID="68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b" exitCode=0 Jan 22 12:30:02 crc kubenswrapper[5120]: I0122 12:30:02.176734 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerDied","Data":"68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b"} Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.189019 5120 generic.go:358] "Generic (PLEG): container finished" podID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerID="e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f" exitCode=0 Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.189080 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerDied","Data":"e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f"} Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.472773 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.625206 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.626179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.626243 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.627332 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.636567 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.636662 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m" (OuterVolumeSpecName: "kube-api-access-xt75m") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "kube-api-access-xt75m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728273 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728342 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728352 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.218877 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.219404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerDied","Data":"960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3"} Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.219472 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.265155 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.271165 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.502503 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.642851 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.651564 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf" (OuterVolumeSpecName: "kube-api-access-mtjsf") pod "3f073b85-c7cf-489a-8e89-7bf6bc9a2124" (UID: "3f073b85-c7cf-489a-8e89-7bf6bc9a2124"). InnerVolumeSpecName "kube-api-access-mtjsf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.744886 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231055 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerDied","Data":"954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc"} Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231158 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231069 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.596211 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" path="/var/lib/kubelet/pods/2667e960-0d1a-4c78-97ea-b1852f27ce17/volumes" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.597573 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.598771 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:30:07 crc kubenswrapper[5120]: I0122 12:30:07.588193 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" path="/var/lib/kubelet/pods/9cde2753-8f27-404a-8fbc-d297e718b3b8/volumes" Jan 22 12:30:09 crc kubenswrapper[5120]: I0122 12:30:09.574664 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:09 crc kubenswrapper[5120]: E0122 12:30:09.575153 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:23 crc kubenswrapper[5120]: I0122 12:30:23.573377 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:23 crc kubenswrapper[5120]: E0122 12:30:23.574862 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:35 crc kubenswrapper[5120]: I0122 12:30:35.595548 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:35 crc kubenswrapper[5120]: E0122 12:30:35.596780 5120 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:50 crc kubenswrapper[5120]: I0122 12:30:50.572281 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:50 crc kubenswrapper[5120]: E0122 12:30:50.573202 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:02 crc kubenswrapper[5120]: I0122 12:31:02.786395 5120 scope.go:117] "RemoveContainer" containerID="639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243" Jan 22 12:31:02 crc kubenswrapper[5120]: I0122 12:31:02.816871 5120 scope.go:117] "RemoveContainer" containerID="265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83" Jan 22 12:31:05 crc kubenswrapper[5120]: I0122 12:31:05.585853 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:05 crc kubenswrapper[5120]: E0122 12:31:05.586332 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:16 crc kubenswrapper[5120]: I0122 12:31:16.572680 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:16 crc kubenswrapper[5120]: E0122 12:31:16.574040 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:31 crc kubenswrapper[5120]: I0122 12:31:31.572911 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:31 crc kubenswrapper[5120]: E0122 12:31:31.574315 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.642484 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644461 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644484 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644503 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644518 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644792 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644817 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.664336 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.678747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.801844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.802389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.802513 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904737 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " 
pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904842 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.906688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.907427 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.935084 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.997560 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.263797 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.268891 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.315023 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"e59ad32aa35941275f930ff7acca21c531282b3c771108f6369332b62762c5cc"} Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.581778 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:45 crc kubenswrapper[5120]: E0122 12:31:45.582454 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:46 crc kubenswrapper[5120]: I0122 12:31:46.331336 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" exitCode=0 Jan 22 12:31:46 crc kubenswrapper[5120]: I0122 12:31:46.331528 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" 
event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7"} Jan 22 12:31:47 crc kubenswrapper[5120]: I0122 12:31:47.344156 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} Jan 22 12:31:48 crc kubenswrapper[5120]: I0122 12:31:48.359181 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" exitCode=0 Jan 22 12:31:48 crc kubenswrapper[5120]: I0122 12:31:48.359875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} Jan 22 12:31:49 crc kubenswrapper[5120]: I0122 12:31:49.371209 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} Jan 22 12:31:49 crc kubenswrapper[5120]: I0122 12:31:49.398639 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pt2lk" podStartSLOduration=4.690976065 podStartE2EDuration="5.398611713s" podCreationTimestamp="2026-01-22 12:31:44 +0000 UTC" firstStartedPulling="2026-01-22 12:31:46.334116581 +0000 UTC m=+2641.078064962" lastFinishedPulling="2026-01-22 12:31:47.041752269 +0000 UTC m=+2641.785700610" observedRunningTime="2026-01-22 12:31:49.388562642 +0000 UTC m=+2644.132511023" watchObservedRunningTime="2026-01-22 12:31:49.398611713 +0000 UTC m=+2644.142560074" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.010227 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.794178 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.794617 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872342 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872817 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974448 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.977229 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.977537 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.999536 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.999717 5120 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.019190 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.084829 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.134843 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.488008 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.505968 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:55 crc kubenswrapper[5120]: W0122 12:31:55.518582 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad7e51cc_e89a_4bed_b500_3b766d041fd7.slice/crio-0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014 WatchSource:0}: Error finding container 0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014: Status 404 returned error can't find the container with id 0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014 Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.435822 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851" exitCode=0 Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.436017 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"} Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.436727 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014"} Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.437880 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.456358 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pt2lk" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" containerID="cri-o://69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" gracePeriod=2 Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.572733 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:57 crc kubenswrapper[5120]: E0122 12:31:57.573173 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.876435 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.945892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.946495 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.946602 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.947531 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities" (OuterVolumeSpecName: "utilities") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.955202 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf" (OuterVolumeSpecName: "kube-api-access-nb7rf") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "kube-api-access-nb7rf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.049503 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.049550 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") on node \"crc\" DevicePath \"\"" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469357 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" exitCode=0 Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469538 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"e59ad32aa35941275f930ff7acca21c531282b3c771108f6369332b62762c5cc"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469558 5120 scope.go:117] "RemoveContainer" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469831 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.473477 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.509495 5120 scope.go:117] "RemoveContainer" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.551355 5120 scope.go:117] "RemoveContainer" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.600904 5120 scope.go:117] "RemoveContainer" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.601399 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": container with ID starting with 69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4 not found: ID does not exist" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601430 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} err="failed to get container status \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": rpc error: code = NotFound desc = could not find container \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": container with ID starting with 69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4 not found: ID does not exist" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601450 5120 scope.go:117] "RemoveContainer" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.601832 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": container with ID starting with 32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1 not found: ID does not exist" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601858 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} err="failed to get container status \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": rpc error: code = NotFound desc = could not find container \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": container with ID starting with 32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1 not found: ID does not exist" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601873 5120 scope.go:117] "RemoveContainer" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.602176 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": container with ID starting with c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7 not found: ID does not exist" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.602203 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7"} err="failed to get container status \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": rpc error: code = NotFound desc = could not find container \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": container with ID starting with c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7 not found: ID does not exist" Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.492326 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21" exitCode=0 Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.492506 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.938235 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.979449 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.072727 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.077991 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.176842 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178068 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-utilities" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178096 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-utilities" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178125 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-content" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178132 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-content" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178148 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178160 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178323 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.183248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.183374 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.187078 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.187482 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.188330 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.283999 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.385201 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.425552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.503220 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"} Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.508451 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.532472 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qptq" podStartSLOduration=6.6139342469999995 podStartE2EDuration="7.532449603s" podCreationTimestamp="2026-01-22 12:31:53 +0000 UTC" firstStartedPulling="2026-01-22 12:31:56.437029494 +0000 UTC m=+2651.180977845" lastFinishedPulling="2026-01-22 12:31:57.35554485 +0000 UTC m=+2652.099493201" observedRunningTime="2026-01-22 12:32:00.5273594 +0000 UTC m=+2655.271307781" watchObservedRunningTime="2026-01-22 12:32:00.532449603 +0000 UTC m=+2655.276397974" Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.000480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:01 crc kubenswrapper[5120]: W0122 12:32:01.010187 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7943991c_5c7d_4a50_80ac_42d7eb0f624f.slice/crio-baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862 WatchSource:0}: Error finding container baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862: Status 404 returned error can't find the container with id baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862 Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.514465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerStarted","Data":"baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862"} Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.594182 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82866e94-add3-43ee-890e-d133e4f2c590" path="/var/lib/kubelet/pods/82866e94-add3-43ee-890e-d133e4f2c590/volumes" Jan 22 12:32:02 crc kubenswrapper[5120]: I0122 12:32:02.528240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerStarted","Data":"2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a"} Jan 22 12:32:02 crc kubenswrapper[5120]: I0122 12:32:02.547790 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" podStartSLOduration=1.5913614630000001 podStartE2EDuration="2.547771749s" podCreationTimestamp="2026-01-22 12:32:00 +0000 UTC" firstStartedPulling="2026-01-22 12:32:01.012118272 +0000 UTC m=+2655.756066633" lastFinishedPulling="2026-01-22 12:32:01.968528578 +0000 UTC m=+2656.712476919" observedRunningTime="2026-01-22 12:32:02.542509603 +0000 UTC m=+2657.286457944" watchObservedRunningTime="2026-01-22 12:32:02.547771749 +0000 UTC m=+2657.291720090" Jan 22 12:32:03 crc kubenswrapper[5120]: I0122 12:32:03.542120 5120 generic.go:358] "Generic (PLEG): container finished" podID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerID="2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a" exitCode=0 Jan 22 12:32:03 crc kubenswrapper[5120]: I0122 12:32:03.542376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerDied","Data":"2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a"} 
Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.865156 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs"
Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.972495 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") "
Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.998385 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh" (OuterVolumeSpecName: "kube-api-access-cwbsh") pod "7943991c-5c7d-4a50-80ac-42d7eb0f624f" (UID: "7943991c-5c7d-4a50-80ac-42d7eb0f624f"). InnerVolumeSpecName "kube-api-access-cwbsh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.073801 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") on node \"crc\" DevicePath \"\""
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.135744 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.135811 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.205913 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.590479 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs"
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.600465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerDied","Data":"baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862"}
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.600613 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862"
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.610980 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"]
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.618400 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"]
Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.661899 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.405061 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"]
Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.581869 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" path="/var/lib/kubelet/pods/7cba6b62-807f-4e37-b350-bc4eef13747b/volumes"
Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.612626 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qptq" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server" containerID="cri-o://f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" gracePeriod=2
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.046272 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236135 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") "
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236301 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") "
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236347 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") "
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.240208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities" (OuterVolumeSpecName: "utilities") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.246794 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw" (OuterVolumeSpecName: "kube-api-access-p8qbw") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "kube-api-access-p8qbw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.281872 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338354 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") on node \"crc\" DevicePath \"\""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338393 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338404 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639048 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" exitCode=0
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639629 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"}
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014"}
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639706 5120 scope.go:117] "RemoveContainer" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639917 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.671610 5120 scope.go:117] "RemoveContainer" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.707099 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"]
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.714525 5120 scope.go:117] "RemoveContainer" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.720137 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"]
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.746923 5120 scope.go:117] "RemoveContainer" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"
Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.747374 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": container with ID starting with f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591 not found: ID does not exist" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747418 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"} err="failed to get container status \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": rpc error: code = NotFound desc = could not find container \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": container with ID starting with f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591 not found: ID does not exist"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747445 5120 scope.go:117] "RemoveContainer" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"
Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.747689 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": container with ID starting with 134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21 not found: ID does not exist" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747729 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} err="failed to get container status \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": rpc error: code = NotFound desc = could not find container \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": container with ID starting with 134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21 not found: ID does not exist"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747753 5120 scope.go:117] "RemoveContainer" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"
Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.748590 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": container with ID starting with 4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851 not found: ID does not exist" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"
Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.748634 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"} err="failed to get container status \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": rpc error: code = NotFound desc = could not find container \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": container with ID starting with 4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851 not found: ID does not exist"
Jan 22 12:32:09 crc kubenswrapper[5120]: I0122 12:32:09.580931 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" path="/var/lib/kubelet/pods/ad7e51cc-e89a-4bed-b500-3b766d041fd7/volumes"
Jan 22 12:32:11 crc kubenswrapper[5120]: I0122 12:32:11.586168 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:32:11 crc kubenswrapper[5120]: E0122 12:32:11.586827 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:32:25 crc kubenswrapper[5120]: I0122 12:32:25.581523 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:32:25 crc kubenswrapper[5120]: E0122 12:32:25.585322 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:32:37 crc kubenswrapper[5120]: I0122 12:32:37.580233 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:32:37 crc kubenswrapper[5120]: E0122 12:32:37.580762 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:32:49 crc kubenswrapper[5120]: I0122 12:32:49.588126 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:32:49 crc kubenswrapper[5120]: E0122 12:32:49.589291 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.764278 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.768086 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.778272 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.780170 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 12:33:00 crc kubenswrapper[5120]: I0122 12:33:00.572722 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:33:00 crc kubenswrapper[5120]: E0122 12:33:00.574320 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:33:03 crc kubenswrapper[5120]: I0122 12:33:03.046066 5120 scope.go:117] "RemoveContainer" containerID="6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7"
Jan 22 12:33:15 crc kubenswrapper[5120]: I0122 12:33:15.586166 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:33:15 crc kubenswrapper[5120]: E0122 12:33:15.587667 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:33:26 crc kubenswrapper[5120]: I0122 12:33:26.572692 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:33:26 crc kubenswrapper[5120]: E0122 12:33:26.574003 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:33:40 crc kubenswrapper[5120]: I0122 12:33:40.572300 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:33:40 crc kubenswrapper[5120]: E0122 12:33:40.573304 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:33:51 crc kubenswrapper[5120]: I0122 12:33:51.572737 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:33:51 crc kubenswrapper[5120]: E0122 12:33:51.574030 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.155124 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"]
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157177 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-utilities"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157206 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-utilities"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157269 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157282 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157304 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-content"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157316 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-content"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157349 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157362 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157562 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157591 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.175864 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"]
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.176112 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.178494 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\""
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.179531 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.180552 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.375035 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.476879 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.501861 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.508521 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.751139 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"]
Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.858765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerStarted","Data":"30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46"}
Jan 22 12:34:02 crc kubenswrapper[5120]: I0122 12:34:02.877749 5120 generic.go:358] "Generic (PLEG): container finished" podID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerID="f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e" exitCode=0
Jan 22 12:34:02 crc kubenswrapper[5120]: I0122 12:34:02.878285 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerDied","Data":"f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e"}
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.233911 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.334391 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"18ea5adf-2b29-46ff-8c49-515dd1615879\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") "
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.358144 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj" (OuterVolumeSpecName: "kube-api-access-tldcj") pod "18ea5adf-2b29-46ff-8c49-515dd1615879" (UID: "18ea5adf-2b29-46ff-8c49-515dd1615879"). InnerVolumeSpecName "kube-api-access-tldcj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.436768 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") on node \"crc\" DevicePath \"\""
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899113 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw"
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899170 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerDied","Data":"30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46"}
Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899221 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46"
Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.344336 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"]
Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.356482 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"]
Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.584933 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" path="/var/lib/kubelet/pods/19e6cf90-948d-4188-8603-4f42f5a2400e/volumes"
Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.586605 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"
Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.912638 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"}
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.886168 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4r8gt"]
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.887876 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc"
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.887899 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc"
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.888727 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc"
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.895723 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.918929 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"]
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.046746 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.046948 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.047077 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148857 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.149144 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.171863 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.274364 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt"
Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.749348 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"]
Jan 22 12:34:15 crc kubenswrapper[5120]: W0122 12:34:15.767036 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f4d8fb_8397_476e_8903_7e5968484c8d.slice/crio-3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6 WatchSource:0}: Error finding container 3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6: Status 404 returned error can't find the container with id 3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6
Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018408 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" exitCode=0
Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018534 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812"}
Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018575 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6"}
Jan 22 12:34:17 crc kubenswrapper[5120]: I0122 12:34:17.029313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"}
Jan 22 12:34:18 crc kubenswrapper[5120]: I0122 12:34:18.052333 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" exitCode=0
Jan 22 12:34:18 crc kubenswrapper[5120]: I0122 12:34:18.052559 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"}
Jan 22 12:34:19 crc kubenswrapper[5120]: I0122 12:34:19.064754 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt"
event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.275213 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.277717 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.341589 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.386044 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4r8gt" podStartSLOduration=10.801212412 podStartE2EDuration="11.386009847s" podCreationTimestamp="2026-01-22 12:34:14 +0000 UTC" firstStartedPulling="2026-01-22 12:34:16.021287656 +0000 UTC m=+2790.765236037" lastFinishedPulling="2026-01-22 12:34:16.606085101 +0000 UTC m=+2791.350033472" observedRunningTime="2026-01-22 12:34:19.088850112 +0000 UTC m=+2793.832798463" watchObservedRunningTime="2026-01-22 12:34:25.386009847 +0000 UTC m=+2800.129958248" Jan 22 12:34:26 crc kubenswrapper[5120]: I0122 12:34:26.205814 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:26 crc kubenswrapper[5120]: I0122 12:34:26.258159 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:28 crc kubenswrapper[5120]: I0122 12:34:28.156559 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4r8gt" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" containerID="cri-o://0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" gracePeriod=2 Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.048404 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165700 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" exitCode=0 Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165791 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6"} Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165815 5120 scope.go:117] "RemoveContainer" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165834 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.190316 5120 scope.go:117] "RemoveContainer" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.211672 5120 scope.go:117] "RemoveContainer" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.230992 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.231072 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.231284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.232795 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities" (OuterVolumeSpecName: "utilities") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.241276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc" (OuterVolumeSpecName: "kube-api-access-tjlpc") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "kube-api-access-tjlpc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.285479 5120 scope.go:117] "RemoveContainer" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.285905 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": container with ID starting with 0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0 not found: ID does not exist" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.285986 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} err="failed to get container status \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": rpc error: code = NotFound desc = could not find container \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": container with ID starting with 0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286016 5120 scope.go:117] "RemoveContainer" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.286375 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": container with ID starting with bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033 not found: ID does not exist" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286406 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"} err="failed to get container status \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": rpc error: code = NotFound desc = could not find container \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": container with ID starting with bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286429 5120 scope.go:117] "RemoveContainer" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.286644 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": container with ID starting with f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812 not found: ID does not exist" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286680 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812"} err="failed to get container status \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": rpc error: code = NotFound desc = could not 
find container \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": container with ID starting with f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.291401 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333104 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333142 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333159 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.520115 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.531329 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.573322 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f4d8fb_8397_476e_8903_7e5968484c8d.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.580656 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" path="/var/lib/kubelet/pods/d3f4d8fb-8397-476e-8903-7e5968484c8d/volumes" Jan 22 12:35:03 crc kubenswrapper[5120]: I0122 12:35:03.247928 5120 scope.go:117] "RemoveContainer" containerID="87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.148465 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150019 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-utilities" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150039 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-utilities" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150065 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150072 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" 
containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150105 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-content" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150113 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-content" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150292 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.156750 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.156857 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.159462 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.159477 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.160409 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.208018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.309188 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.337071 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.475799 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.720515 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:01 crc kubenswrapper[5120]: I0122 12:36:01.059289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerStarted","Data":"65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6"} Jan 22 12:36:03 crc kubenswrapper[5120]: I0122 12:36:03.077651 5120 generic.go:358] "Generic (PLEG): container finished" podID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerID="91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0" exitCode=0 Jan 22 12:36:03 crc kubenswrapper[5120]: I0122 12:36:03.078210 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerDied","Data":"91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0"} Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.400785 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.497626 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"823e7d1b-b74d-47c1-967a-fc44dab160b8\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.507511 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n" (OuterVolumeSpecName: "kube-api-access-sjh4n") pod "823e7d1b-b74d-47c1-967a-fc44dab160b8" (UID: "823e7d1b-b74d-47c1-967a-fc44dab160b8"). InnerVolumeSpecName "kube-api-access-sjh4n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.600317 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") on node \"crc\" DevicePath \"\"" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115430 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerDied","Data":"65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6"} Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115494 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115487 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.464194 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.475056 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.581310 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" path="/var/lib/kubelet/pods/3f073b85-c7cf-489a-8e89-7bf6bc9a2124/volumes" Jan 22 12:36:31 crc kubenswrapper[5120]: I0122 12:36:31.972943 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:36:31 crc kubenswrapper[5120]: I0122 12:36:31.973732 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:01 crc kubenswrapper[5120]: I0122 12:37:01.972977 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:01 crc kubenswrapper[5120]: I0122 12:37:01.973567 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:03 crc kubenswrapper[5120]: I0122 12:37:03.443259 5120 scope.go:117] "RemoveContainer" containerID="e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.972924 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.974270 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.974349 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.975524 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.975668 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" gracePeriod=600 Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.116335 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.971661 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" exitCode=0 Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.971813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"} Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.972314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.972358 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.901185 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.902853 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.932943 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.936642 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.149460 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151460 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151482 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151726 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.224752 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.224886 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.227335 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.227644 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.228066 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.316594 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.418270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.451351 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.548879 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.777646 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: W0122 12:38:00.784297 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96479bdf_524d_44cf_84b0_0be4a402a317.slice/crio-a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193 WatchSource:0}: Error finding container a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193: Status 404 returned error can't find the container with id a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193 Jan 22 12:38:01 crc kubenswrapper[5120]: I0122 12:38:01.222671 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerStarted","Data":"a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193"} Jan 22 12:38:03 crc kubenswrapper[5120]: I0122 12:38:03.248550 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerDied","Data":"7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b"} Jan 22 12:38:03 crc kubenswrapper[5120]: I0122 12:38:03.248433 5120 generic.go:358] "Generic (PLEG): container finished" podID="96479bdf-524d-44cf-84b0-0be4a402a317" containerID="7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b" exitCode=0 Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.595146 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.617165 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"96479bdf-524d-44cf-84b0-0be4a402a317\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.623172 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8" (OuterVolumeSpecName: "kube-api-access-f4vv8") pod "96479bdf-524d-44cf-84b0-0be4a402a317" (UID: "96479bdf-524d-44cf-84b0-0be4a402a317"). InnerVolumeSpecName "kube-api-access-f4vv8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.718612 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") on node \"crc\" DevicePath \"\"" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269590 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269617 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerDied","Data":"a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193"} Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269679 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.670737 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.678242 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:38:07 crc kubenswrapper[5120]: I0122 12:38:07.581595 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" path="/var/lib/kubelet/pods/7943991c-5c7d-4a50-80ac-42d7eb0f624f/volumes" Jan 22 12:39:03 crc kubenswrapper[5120]: I0122 12:39:03.608865 5120 scope.go:117] "RemoveContainer" containerID="2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.147937 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.150754 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.150882 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.151099 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.203244 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.203424 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.208501 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.210072 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.210347 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.310870 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.412801 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.453296 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.534253 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.764585 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.378380 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerStarted","Data":"d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d"} Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.974666 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.974751 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:40:02 crc kubenswrapper[5120]: I0122 12:40:02.388090 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerStarted","Data":"d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26"} Jan 22 12:40:02 crc kubenswrapper[5120]: I0122 12:40:02.407928 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" podStartSLOduration=1.330582421 podStartE2EDuration="2.407908378s" podCreationTimestamp="2026-01-22 12:40:00 +0000 UTC" firstStartedPulling="2026-01-22 12:40:00.778256493 +0000 UTC m=+3135.522204834" lastFinishedPulling="2026-01-22 12:40:01.85558245 +0000 UTC m=+3136.599530791" observedRunningTime="2026-01-22 12:40:02.401612396 +0000 UTC m=+3137.145560757" watchObservedRunningTime="2026-01-22 12:40:02.407908378 +0000 UTC m=+3137.151856729" Jan 22 12:40:03 crc kubenswrapper[5120]: I0122 12:40:03.403081 5120 generic.go:358] "Generic (PLEG): container finished" podID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerID="d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26" exitCode=0 Jan 22 12:40:03 crc kubenswrapper[5120]: I0122 12:40:03.403312 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerDied","Data":"d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26"} Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.776172 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.910022 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"eaee48fe-e9ab-42e2-926c-6d27414eec47\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.931504 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l" (OuterVolumeSpecName: "kube-api-access-tkb5l") pod "eaee48fe-e9ab-42e2-926c-6d27414eec47" (UID: "eaee48fe-e9ab-42e2-926c-6d27414eec47"). InnerVolumeSpecName "kube-api-access-tkb5l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.012718 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") on node \"crc\" DevicePath \"\"" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerDied","Data":"d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d"} Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438385 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438452 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.492252 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.503899 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.584722 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" path="/var/lib/kubelet/pods/18ea5adf-2b29-46ff-8c49-515dd1615879/volumes" Jan 22 12:40:05 crc kubenswrapper[5120]: E0122 12:40:05.607365 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaee48fe_e9ab_42e2_926c_6d27414eec47.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:40:31 crc kubenswrapper[5120]: I0122 12:40:31.972550 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:31 crc kubenswrapper[5120]: I0122 12:40:31.973116 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.972903 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.973828 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.973903 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.974789 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.974848 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" gracePeriod=600 Jan 22 
12:41:02 crc kubenswrapper[5120]: E0122 12:41:02.105113 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.986858 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" exitCode=0 Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.986926 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.987032 5120 scope.go:117] "RemoveContainer" containerID="1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.987658 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:02 crc kubenswrapper[5120]: E0122 12:41:02.988168 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:03 crc kubenswrapper[5120]: I0122 12:41:03.798436 5120 scope.go:117] "RemoveContainer" containerID="f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e" Jan 22 12:41:17 crc kubenswrapper[5120]: I0122 12:41:17.572646 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:17 crc kubenswrapper[5120]: E0122 12:41:17.574748 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:32 crc kubenswrapper[5120]: I0122 12:41:32.571512 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:32 crc kubenswrapper[5120]: E0122 12:41:32.572353 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:44 crc kubenswrapper[5120]: I0122 12:41:44.573264 5120 scope.go:117] "RemoveContainer" 
containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:44 crc kubenswrapper[5120]: E0122 12:41:44.574201 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:57 crc kubenswrapper[5120]: I0122 12:41:57.572823 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:57 crc kubenswrapper[5120]: E0122 12:41:57.574193 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.165765 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167029 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167053 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167336 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.173975 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.179037 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.179342 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.180470 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.182768 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.266747 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.368902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.415495 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.509081 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.812642 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:01 crc kubenswrapper[5120]: I0122 12:42:01.543943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerStarted","Data":"d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee"} Jan 22 12:42:02 crc kubenswrapper[5120]: I0122 12:42:02.559610 5120 generic.go:358] "Generic (PLEG): container finished" podID="4579a92b-d731-4627-b131-998575817977" containerID="2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5" exitCode=0 Jan 22 12:42:02 crc kubenswrapper[5120]: I0122 12:42:02.559708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerDied","Data":"2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5"} Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.819089 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.924369 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"4579a92b-d731-4627-b131-998575817977\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.948546 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj" (OuterVolumeSpecName: "kube-api-access-vcrrj") pod "4579a92b-d731-4627-b131-998575817977" (UID: "4579a92b-d731-4627-b131-998575817977"). InnerVolumeSpecName "kube-api-access-vcrrj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.026826 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.576505 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.576525 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerDied","Data":"d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee"} Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.577593 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.893878 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.899775 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:42:05 crc kubenswrapper[5120]: I0122 12:42:05.590788 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" path="/var/lib/kubelet/pods/823e7d1b-b74d-47c1-967a-fc44dab160b8/volumes" Jan 22 12:42:08 crc kubenswrapper[5120]: I0122 12:42:08.572387 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:08 crc kubenswrapper[5120]: E0122 12:42:08.573299 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.182947 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184170 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184184 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184303 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.202457 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.202666 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.247556 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.247936 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.248149 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350202 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350621 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.351497 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " 
pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.351728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.374661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.568705 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.034496 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642045 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" exitCode=0 Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642707 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727"} Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642746 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerStarted","Data":"274716f9fe261a0e6693c3f5dc6652a5f7b6c5afa3fae9c6eb706d245a939590"} Jan 22 12:42:13 crc kubenswrapper[5120]: I0122 12:42:13.663624 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" exitCode=0 Jan 22 12:42:13 crc kubenswrapper[5120]: I0122 12:42:13.663677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9"} Jan 22 12:42:14 crc kubenswrapper[5120]: I0122 12:42:14.677598 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerStarted","Data":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} Jan 22 12:42:14 crc kubenswrapper[5120]: I0122 12:42:14.711775 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2r5j" podStartSLOduration=3.634390657 podStartE2EDuration="4.711742824s" podCreationTimestamp="2026-01-22 12:42:10 +0000 UTC" firstStartedPulling="2026-01-22 12:42:11.643728705 +0000 UTC m=+3266.387677046" lastFinishedPulling="2026-01-22 12:42:12.721080872 +0000 UTC m=+3267.465029213" observedRunningTime="2026-01-22 12:42:14.704576531 +0000 UTC m=+3269.448524882" 
watchObservedRunningTime="2026-01-22 12:42:14.711742824 +0000 UTC m=+3269.455691235" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.569942 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.570759 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.654249 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.797726 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.900933 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:22 crc kubenswrapper[5120]: I0122 12:42:22.748251 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2r5j" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" containerID="cri-o://9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" gracePeriod=2 Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.571952 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.573987 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.654507 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775321 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.778167 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities" (OuterVolumeSpecName: "utilities") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787658 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" exitCode=0 Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787833 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787914 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"274716f9fe261a0e6693c3f5dc6652a5f7b6c5afa3fae9c6eb706d245a939590"} Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787947 5120 scope.go:117] "RemoveContainer" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.788288 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.789315 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h" (OuterVolumeSpecName: "kube-api-access-25l4h") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "kube-api-access-25l4h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.837869 5120 scope.go:117] "RemoveContainer" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.865048 5120 scope.go:117] "RemoveContainer" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.877151 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.877186 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.887242 5120 scope.go:117] "RemoveContainer" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.888048 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": container with ID starting with 9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63 not found: ID does not exist" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888105 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} err="failed to get container status \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": rpc error: code = NotFound desc = could not find container \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": container with ID starting with 9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63 not found: ID does not exist" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888174 5120 scope.go:117] "RemoveContainer" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.888605 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": container with ID starting with fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9 not found: ID does not exist" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888764 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9"} err="failed to get container status \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": rpc error: code = NotFound desc = could not find container \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": container with ID starting with fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9 not found: ID does not exist" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888943 5120 scope.go:117] "RemoveContainer" 
containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.889384 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": container with ID starting with 654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727 not found: ID does not exist" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.889413 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727"} err="failed to get container status \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": rpc error: code = NotFound desc = could not find container \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": container with ID starting with 654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727 not found: ID does not exist" Jan 22 12:42:24 crc kubenswrapper[5120]: I0122 12:42:24.825362 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:42:24 crc kubenswrapper[5120]: I0122 12:42:24.892710 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.031781 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.038390 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.583504 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" path="/var/lib/kubelet/pods/008de9a1-3447-4f73-ab0e-f1b6d234a1de/volumes" Jan 22 12:42:35 crc kubenswrapper[5120]: I0122 12:42:35.582856 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:35 crc kubenswrapper[5120]: E0122 12:42:35.584073 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:48 crc kubenswrapper[5120]: I0122 12:42:48.571602 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:48 crc kubenswrapper[5120]: E0122 12:42:48.572632 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.068521 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.079125 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.083321 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.093384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:43:02 crc kubenswrapper[5120]: I0122 12:43:02.571348 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:02 crc kubenswrapper[5120]: E0122 12:43:02.572400 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:03 crc kubenswrapper[5120]: I0122 12:43:03.938016 5120 scope.go:117] "RemoveContainer" containerID="91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0" Jan 22 12:43:17 crc kubenswrapper[5120]: I0122 12:43:17.573608 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:17 crc kubenswrapper[5120]: E0122 12:43:17.574868 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.357898 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359777 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359803 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359840 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-utilities" Jan 22 12:43:25 crc kubenswrapper[5120]: 
I0122 12:43:25.359853 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-utilities" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359926 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-content" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359938 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-content" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.360183 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.366600 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.378118 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475585 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475688 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475743 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579519 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579605 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579639 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " 
pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.580226 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.581731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.619824 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.715257 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.168059 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.168898 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417267 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12" exitCode=0 Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12"} Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417416 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerStarted","Data":"79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4"} Jan 22 12:43:27 crc kubenswrapper[5120]: I0122 12:43:27.428101 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4" exitCode=0 Jan 22 12:43:27 crc kubenswrapper[5120]: I0122 12:43:27.428534 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4"} Jan 22 12:43:28 crc kubenswrapper[5120]: I0122 12:43:28.442882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerStarted","Data":"47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b"} 
Jan 22 12:43:28 crc kubenswrapper[5120]: I0122 12:43:28.571771 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:28 crc kubenswrapper[5120]: E0122 12:43:28.572011 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.716263 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.718814 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.772832 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.809088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp4g5" podStartSLOduration=10.271308444 podStartE2EDuration="10.80905739s" podCreationTimestamp="2026-01-22 12:43:25 +0000 UTC" firstStartedPulling="2026-01-22 12:43:26.418695913 +0000 UTC m=+3341.162644274" lastFinishedPulling="2026-01-22 12:43:26.956444839 +0000 UTC m=+3341.700393220" observedRunningTime="2026-01-22 12:43:28.464246881 +0000 UTC m=+3343.208195252" watchObservedRunningTime="2026-01-22 12:43:35.80905739 +0000 UTC m=+3350.553005761" Jan 22 12:43:36 crc kubenswrapper[5120]: I0122 12:43:36.699247 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:36 crc kubenswrapper[5120]: I0122 12:43:36.752380 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:38 crc kubenswrapper[5120]: I0122 12:43:38.677050 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rp4g5" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" containerID="cri-o://47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" gracePeriod=2 Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691182 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" exitCode=0 Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691242 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b"} Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691794 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4"} Jan 22 12:43:39 crc 
kubenswrapper[5120]: I0122 12:43:39.691887 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.696241 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.749891 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.749996 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.750046 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.751482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities" (OuterVolumeSpecName: "utilities") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.757710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw" (OuterVolumeSpecName: "kube-api-access-2nbfw") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "kube-api-access-2nbfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.798177 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.851982 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.852023 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.852034 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.699576 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.749240 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.762098 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:41 crc kubenswrapper[5120]: I0122 12:43:41.589403 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" path="/var/lib/kubelet/pods/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10/volumes" Jan 22 12:43:42 crc kubenswrapper[5120]: I0122 12:43:42.573349 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:42 crc kubenswrapper[5120]: E0122 12:43:42.573951 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:56 crc kubenswrapper[5120]: I0122 12:43:56.572155 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:56 crc kubenswrapper[5120]: E0122 12:43:56.573277 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.193511 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195464 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-content" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195488 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" 
containerName="extract-content" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195529 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195539 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195578 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-utilities" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195590 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-utilities" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195906 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.204241 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.206485 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.207187 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.207469 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.208820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.310472 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.412512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.450461 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.521985 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.998011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:01 crc kubenswrapper[5120]: W0122 12:44:01.005883 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5973be67_1e77_468f_aace_0dc45ba40609.slice/crio-a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3 WatchSource:0}: Error finding container a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3: Status 404 returned error can't find the container with id a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3 Jan 22 12:44:01 crc kubenswrapper[5120]: I0122 12:44:01.890171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerStarted","Data":"a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3"} Jan 22 12:44:02 crc kubenswrapper[5120]: I0122 12:44:02.897922 5120 generic.go:358] "Generic (PLEG): container finished" podID="5973be67-1e77-468f-aace-0dc45ba40609" containerID="da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805" exitCode=0 Jan 22 12:44:02 crc kubenswrapper[5120]: I0122 12:44:02.898294 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerDied","Data":"da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805"} Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.269029 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.391552 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"5973be67-1e77-468f-aace-0dc45ba40609\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.399190 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8" (OuterVolumeSpecName: "kube-api-access-jj4w8") pod "5973be67-1e77-468f-aace-0dc45ba40609" (UID: "5973be67-1e77-468f-aace-0dc45ba40609"). InnerVolumeSpecName "kube-api-access-jj4w8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.494138 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919286 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerDied","Data":"a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3"} Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919371 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3" Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.328750 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.334029 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.587802 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" path="/var/lib/kubelet/pods/96479bdf-524d-44cf-84b0-0be4a402a317/volumes" Jan 22 12:44:11 crc kubenswrapper[5120]: I0122 12:44:11.572563 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:11 crc kubenswrapper[5120]: E0122 12:44:11.573878 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:23 crc kubenswrapper[5120]: I0122 12:44:23.571755 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:23 crc kubenswrapper[5120]: E0122 12:44:23.574492 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.315607 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.317671 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.317798 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.318112 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.354353 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.354764 5120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.491801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.492117 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.492231 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593629 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593755 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.594258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.594288 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.614249 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8wn\" (UniqueName: 
\"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.684185 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.990377 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:27 crc kubenswrapper[5120]: I0122 12:44:27.136120 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"1a6185c4923561c56fb91a67e997cf57542fd8b5fcf6e9a8a76e540b46ee71dc"} Jan 22 12:44:28 crc kubenswrapper[5120]: I0122 12:44:28.147436 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128" exitCode=0 Jan 22 12:44:28 crc kubenswrapper[5120]: I0122 12:44:28.147520 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128"} Jan 22 12:44:29 crc kubenswrapper[5120]: I0122 12:44:29.158290 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87"} Jan 22 12:44:30 crc kubenswrapper[5120]: I0122 12:44:30.170301 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87" exitCode=0 Jan 22 12:44:30 crc kubenswrapper[5120]: I0122 12:44:30.170359 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87"} Jan 22 12:44:31 crc kubenswrapper[5120]: I0122 12:44:31.182507 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d"} Jan 22 12:44:31 crc kubenswrapper[5120]: I0122 12:44:31.214217 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2k92n" podStartSLOduration=4.559514791 podStartE2EDuration="5.214198686s" podCreationTimestamp="2026-01-22 12:44:26 +0000 UTC" firstStartedPulling="2026-01-22 12:44:28.149174033 +0000 UTC m=+3402.893122415" lastFinishedPulling="2026-01-22 12:44:28.803857939 +0000 UTC m=+3403.547806310" observedRunningTime="2026-01-22 12:44:31.20341088 +0000 UTC m=+3405.947359231" watchObservedRunningTime="2026-01-22 12:44:31.214198686 +0000 UTC m=+3405.958147027" Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.684713 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2k92n" Jan 22 
Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.684713 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2k92n"
Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.685270 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2k92n"
Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.751837 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2k92n"
Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.297774 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2k92n"
Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.350819 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2k92n"]
Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.572396 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"
Jan 22 12:44:37 crc kubenswrapper[5120]: E0122 12:44:37.572907 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:44:39 crc kubenswrapper[5120]: I0122 12:44:39.254619 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2k92n" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" containerID="cri-o://d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" gracePeriod=2
Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.269762 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" exitCode=0
Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.269874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d"}
Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.782336 5120 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.943814 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.943947 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.944016 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.945265 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities" (OuterVolumeSpecName: "utilities") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.951309 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn" (OuterVolumeSpecName: "kube-api-access-wb8wn") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "kube-api-access-wb8wn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.021241 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045514 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045547 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045559 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284311 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"1a6185c4923561c56fb91a67e997cf57542fd8b5fcf6e9a8a76e540b46ee71dc"} Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284386 5120 scope.go:117] "RemoveContainer" containerID="d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284590 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.320762 5120 scope.go:117] "RemoveContainer" containerID="7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.337731 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.347770 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.354862 5120 scope.go:117] "RemoveContainer" containerID="df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.586307 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" path="/var/lib/kubelet/pods/36231898-a2c8-4be7-bd5b-c69ebfb5d706/volumes" Jan 22 12:44:51 crc kubenswrapper[5120]: I0122 12:44:51.573875 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:51 crc kubenswrapper[5120]: E0122 12:44:51.575529 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.183851 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.187865 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188044 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188172 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-content" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188255 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-content" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188831 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-utilities" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.189042 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-utilities" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.189356 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.198285 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.198535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.201242 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.207928 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277047 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277458 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.378749 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.378901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.379053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.380585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.390661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.410752 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.538518 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.009616 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:01 crc kubenswrapper[5120]: W0122 12:45:01.019360 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882b7ca2_9793_49f3_b5e8_883119a96591.slice/crio-731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72 WatchSource:0}: Error finding container 731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72: Status 404 returned error can't find the container with id 731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72 Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483236 5120 generic.go:358] "Generic (PLEG): container finished" podID="882b7ca2-9793-49f3-b5e8-883119a96591" containerID="500bb78c536c9c94640d7c27f7b87d17493e14dcebcc3f4e10a31b030bc88263" exitCode=0 Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483423 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerDied","Data":"500bb78c536c9c94640d7c27f7b87d17493e14dcebcc3f4e10a31b030bc88263"} Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerStarted","Data":"731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72"} Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.807558 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.926275 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.927529 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.927830 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.931223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume" (OuterVolumeSpecName: "config-volume") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.935358 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf" (OuterVolumeSpecName: "kube-api-access-4hmgf") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "kube-api-access-4hmgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.942784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031469 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031529 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031553 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504242 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerDied","Data":"731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72"} Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504628 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504535 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.878483 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.887845 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:45:04 crc kubenswrapper[5120]: I0122 12:45:04.098059 5120 scope.go:117] "RemoveContainer" containerID="7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b" Jan 22 12:45:05 crc kubenswrapper[5120]: I0122 12:45:05.577102 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:05 crc kubenswrapper[5120]: E0122 12:45:05.577563 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:05 crc kubenswrapper[5120]: I0122 12:45:05.584944 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" path="/var/lib/kubelet/pods/d57ca8ee-4b8e-4b45-983a-11332a457cf8/volumes" Jan 22 12:45:16 crc kubenswrapper[5120]: I0122 12:45:16.571818 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:16 crc kubenswrapper[5120]: E0122 12:45:16.574474 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:28 crc kubenswrapper[5120]: I0122 12:45:28.573476 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:28 crc kubenswrapper[5120]: E0122 12:45:28.574858 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:40 crc kubenswrapper[5120]: I0122 12:45:40.572491 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:40 crc kubenswrapper[5120]: E0122 12:45:40.573573 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:51 crc kubenswrapper[5120]: I0122 12:45:51.572612 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:51 crc kubenswrapper[5120]: E0122 12:45:51.573427 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.151063 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.152630 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.152644 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.153001 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.158420 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.161944 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.162353 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.164226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.170258 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.340268 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.441791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.475755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q6vp\" (UniqueName: 
\"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.481621 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.991827 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:01 crc kubenswrapper[5120]: I0122 12:46:01.056501 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerStarted","Data":"57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1"} Jan 22 12:46:03 crc kubenswrapper[5120]: I0122 12:46:03.100871 5120 generic.go:358] "Generic (PLEG): container finished" podID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerID="93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064" exitCode=0 Jan 22 12:46:03 crc kubenswrapper[5120]: I0122 12:46:03.101029 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerDied","Data":"93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064"} Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.292109 5120 scope.go:117] "RemoveContainer" containerID="73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.438125 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.511229 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"f4e688fc-6166-4472-9385-e06fa5bc818b\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.518346 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp" (OuterVolumeSpecName: "kube-api-access-6q6vp") pod "f4e688fc-6166-4472-9385-e06fa5bc818b" (UID: "f4e688fc-6166-4472-9385-e06fa5bc818b"). InnerVolumeSpecName "kube-api-access-6q6vp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.614582 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") on node \"crc\" DevicePath \"\"" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.124970 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerDied","Data":"57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1"} Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.125342 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.124984 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.538617 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.549563 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.602717 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" path="/var/lib/kubelet/pods/eaee48fe-e9ab-42e2-926c-6d27414eec47/volumes" Jan 22 12:46:06 crc kubenswrapper[5120]: I0122 12:46:06.571412 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:46:07 crc kubenswrapper[5120]: I0122 12:46:07.144280 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} Jan 22 12:47:04 crc kubenswrapper[5120]: I0122 12:47:04.371669 5120 scope.go:117] "RemoveContainer" containerID="d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.235440 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.236755 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.248929 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.249083 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.142462 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144620 5120 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144640 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144846 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.152370 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157700 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157943 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157749 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.163247 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.235844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.338083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.378476 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.481263 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.810532 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.931262 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerStarted","Data":"1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c"} Jan 22 12:48:02 crc kubenswrapper[5120]: I0122 12:48:02.947834 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerStarted","Data":"daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13"} Jan 22 12:48:03 crc kubenswrapper[5120]: I0122 12:48:03.959738 5120 generic.go:358] "Generic (PLEG): container finished" podID="f1196931-91a2-4869-bff6-80785ee0ed43" containerID="daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13" exitCode=0 Jan 22 12:48:03 crc kubenswrapper[5120]: I0122 12:48:03.959859 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerDied","Data":"daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13"} Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.260905 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.431916 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"f1196931-91a2-4869-bff6-80785ee0ed43\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.438247 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc" (OuterVolumeSpecName: "kube-api-access-zpztc") pod "f1196931-91a2-4869-bff6-80785ee0ed43" (UID: "f1196931-91a2-4869-bff6-80785ee0ed43"). InnerVolumeSpecName "kube-api-access-zpztc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.533517 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") on node \"crc\" DevicePath \"\"" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981637 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981678 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerDied","Data":"1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c"} Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981888 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c" Jan 22 12:48:06 crc kubenswrapper[5120]: I0122 12:48:06.019260 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:48:06 crc kubenswrapper[5120]: I0122 12:48:06.024027 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:48:07 crc kubenswrapper[5120]: I0122 12:48:07.582823 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4579a92b-d731-4627-b131-998575817977" path="/var/lib/kubelet/pods/4579a92b-d731-4627-b131-998575817977/volumes" Jan 22 12:48:31 crc kubenswrapper[5120]: I0122 12:48:31.972700 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:48:31 crc kubenswrapper[5120]: I0122 12:48:31.973259 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:01 crc kubenswrapper[5120]: I0122 12:49:01.972562 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:01 crc kubenswrapper[5120]: I0122 12:49:01.973300 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:04 crc kubenswrapper[5120]: I0122 12:49:04.555483 5120 scope.go:117] "RemoveContainer" containerID="2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.972187 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.974047 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.974525 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.975268 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.975406 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" gracePeriod=600 Jan 22 12:49:32 crc kubenswrapper[5120]: I0122 12:49:32.609696 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.258558 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" exitCode=0 Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.259448 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.260163 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.260196 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.146440 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147736 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147757 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147943 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.176586 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.176656 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.178934 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.179104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.180081 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.180134 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.281160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.307151 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.499605 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:01 crc kubenswrapper[5120]: I0122 12:50:01.007655 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:01 crc kubenswrapper[5120]: I0122 12:50:01.523764 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerStarted","Data":"d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8"} Jan 22 12:50:02 crc kubenswrapper[5120]: I0122 12:50:02.534604 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerStarted","Data":"727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483"} Jan 22 12:50:02 crc kubenswrapper[5120]: I0122 12:50:02.553087 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484770-td669" podStartSLOduration=1.5382092379999999 podStartE2EDuration="2.553048203s" podCreationTimestamp="2026-01-22 12:50:00 +0000 UTC" firstStartedPulling="2026-01-22 12:50:01.022022133 +0000 UTC m=+3735.765970474" lastFinishedPulling="2026-01-22 12:50:02.036861058 +0000 UTC m=+3736.780809439" observedRunningTime="2026-01-22 12:50:02.550524734 +0000 UTC m=+3737.294473075" watchObservedRunningTime="2026-01-22 12:50:02.553048203 +0000 UTC m=+3737.296996544" Jan 22 12:50:03 crc kubenswrapper[5120]: I0122 12:50:03.543986 5120 generic.go:358] "Generic (PLEG): container finished" podID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerID="727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483" exitCode=0 Jan 22 12:50:03 crc kubenswrapper[5120]: I0122 12:50:03.544078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerDied","Data":"727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483"} Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.740206 5120 scope.go:117] "RemoveContainer" containerID="ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.776017 5120 scope.go:117] "RemoveContainer" containerID="8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.801362 5120 scope.go:117] "RemoveContainer" containerID="47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.877402 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.958647 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"730d9559-f767-44f0-9346-cfba60c8f1b5\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.966942 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574" (OuterVolumeSpecName: "kube-api-access-w6574") pod "730d9559-f767-44f0-9346-cfba60c8f1b5" (UID: "730d9559-f767-44f0-9346-cfba60c8f1b5"). InnerVolumeSpecName "kube-api-access-w6574". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.061087 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") on node \"crc\" DevicePath \"\"" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567088 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerDied","Data":"d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8"} Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567367 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567104 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.630840 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.640843 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:50:07 crc kubenswrapper[5120]: I0122 12:50:07.589567 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5973be67-1e77-468f-aace-0dc45ba40609" path="/var/lib/kubelet/pods/5973be67-1e77-468f-aace-0dc45ba40609/volumes" Jan 22 12:51:04 crc kubenswrapper[5120]: I0122 12:51:04.964133 5120 scope.go:117] "RemoveContainer" containerID="da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.151905 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153279 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153293 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153421 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.172009 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.172150 5120 util.go:30] "No sandbox for pod can be found. 
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.181839 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.182032 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.182404 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\""
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.283931 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.386757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.424841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.501809 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.753145 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"]
Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.678576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerStarted","Data":"e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef"}
Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.973200 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.973272 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:52:02 crc kubenswrapper[5120]: I0122 12:52:02.687896 5120 generic.go:358] "Generic (PLEG): container finished" podID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerID="ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8" exitCode=0
Jan 22 12:52:02 crc kubenswrapper[5120]: I0122 12:52:02.688191 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerDied","Data":"ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8"}
Jan 22 12:52:03 crc kubenswrapper[5120]: I0122 12:52:03.972832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.049181 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"813b78c4-6644-444f-baa4-af92c9a1bfd0\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") "
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.075182 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls" (OuterVolumeSpecName: "kube-api-access-z2kls") pod "813b78c4-6644-444f-baa4-af92c9a1bfd0" (UID: "813b78c4-6644-444f-baa4-af92c9a1bfd0"). InnerVolumeSpecName "kube-api-access-z2kls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.151023 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") on node \"crc\" DevicePath \"\""
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.709835 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t"
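The two probe entries above record the same failure twice (once by the prober, once at the pod level): an HTTP GET against the machine-config-daemon's health endpoint was refused. A rough Go equivalent of such an HTTP liveness check, assuming the usual kubelet rule that a probe fails on a transport error or a status code of 400 or higher:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing listening this prints a "connection refused" error,
		// matching the probe output logged above.
		fmt.Printf("Probe failed: %v\n", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		fmt.Printf("Probe failed: status %d\n", resp.StatusCode)
		return
	}
	fmt.Println("Probe succeeded")
}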
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.710066 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerDied","Data":"e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef"}
Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.710136 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef"
Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.046246 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"]
Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.053391 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"]
Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.587493 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" path="/var/lib/kubelet/pods/f4e688fc-6166-4472-9385-e06fa5bc818b/volumes"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.215353 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"]
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218151 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218216 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218506 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.259248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"]
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.259524 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.372761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.372819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.373009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.473967 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474017 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474670 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.476223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.493712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj"
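The catalog pod mounts two emptyDir volumes (utilities, catalog-content) plus a projected kube-api-access-* volume. A standard kube-api-access projection merges three sources into one directory; the struct below is a hypothetical mirror of that shape, not the Kubernetes API types (kube-root-ca.crt is the ConfigMap the reflector warmed earlier):

package main

import "fmt"

type projectedSource struct {
	kind string // serviceAccountToken | configMap | downwardAPI
	path string // file created under the volume's mount point
}

func main() {
	sources := []projectedSource{
		{"serviceAccountToken", "token"},         // bound token for the pod's service account
		{"configMap kube-root-ca.crt", "ca.crt"}, // cluster CA bundle
		{"downwardAPI", "namespace"},             // the pod's own namespace
	}
	for _, s := range sources {
		fmt.Printf("%-28s -> %s\n", s.kind, s.path)
	}
}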
\"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.584150 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.828822 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.959314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerStarted","Data":"4098ba92d3db347255dbcff80e5f1759f819e0281dbb289fcce2e22253a6b5a2"} Jan 22 12:52:28 crc kubenswrapper[5120]: I0122 12:52:28.968800 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" exitCode=0 Jan 22 12:52:28 crc kubenswrapper[5120]: I0122 12:52:28.968952 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363"} Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.972326 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.972902 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.995852 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" exitCode=0 Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.996023 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7"} Jan 22 12:52:33 crc kubenswrapper[5120]: I0122 12:52:33.007585 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerStarted","Data":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} Jan 22 12:52:33 crc kubenswrapper[5120]: I0122 12:52:33.035104 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2mccj" podStartSLOduration=4.00324177 podStartE2EDuration="6.035081911s" podCreationTimestamp="2026-01-22 12:52:27 +0000 UTC" firstStartedPulling="2026-01-22 12:52:28.970631167 +0000 UTC m=+3883.714579538" lastFinishedPulling="2026-01-22 12:52:31.002471338 +0000 UTC m=+3885.746419679" observedRunningTime="2026-01-22 12:52:33.030680059 +0000 UTC m=+3887.774628490" 
Jan 22 12:52:37 crc kubenswrapper[5120]: I0122 12:52:37.585125 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:37 crc kubenswrapper[5120]: I0122 12:52:37.585733 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:38 crc kubenswrapper[5120]: I0122 12:52:38.634884 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2mccj" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" probeResult="failure" output=<
Jan 22 12:52:38 crc kubenswrapper[5120]: 	timeout: failed to connect service ":50051" within 1s
Jan 22 12:52:38 crc kubenswrapper[5120]: >
Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.655284 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.717637 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.904007 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"]
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.165546 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2mccj" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" containerID="cri-o://b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" gracePeriod=2
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.547679 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj"
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") "
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660437 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") "
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") "
Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.675241 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities" (OuterVolumeSpecName: "utilities") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
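The startup probe output above ("timeout: failed to connect service ':50051' within 1s") comes from the registry-server's health check, most likely a gRPC health probe against port 50051. The connect-within-1s behavior can be approximated with a plain TCP dial; the sketch below is an approximation, not the actual probe binary:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The real probe targets the container; 127.0.0.1 is a stand-in here.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", ":50051", err)
		return
	}
	conn.Close()
	fmt.Println("probe succeeded")
}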
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.680555 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5" (OuterVolumeSpecName: "kube-api-access-c75c5") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "kube-api-access-c75c5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.755673 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762718 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762869 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762953 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.200814 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" exitCode=0 Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.200999 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201478 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"4098ba92d3db347255dbcff80e5f1759f819e0281dbb289fcce2e22253a6b5a2"} Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201121 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201568 5120 scope.go:117] "RemoveContainer" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.235691 5120 scope.go:117] "RemoveContainer" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.252686 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.259314 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.274344 5120 scope.go:117] "RemoveContainer" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.304232 5120 scope.go:117] "RemoveContainer" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.305049 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": container with ID starting with b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d not found: ID does not exist" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305103 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} err="failed to get container status \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": rpc error: code = NotFound desc = could not find container \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": container with ID starting with b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d not found: ID does not exist" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305138 5120 scope.go:117] "RemoveContainer" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.305622 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": container with ID starting with 7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7 not found: ID does not exist" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305674 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7"} err="failed to get container status \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": rpc error: code = NotFound desc = could not find container \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": container with ID starting with 7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7 not found: ID does not exist" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305737 5120 scope.go:117] "RemoveContainer" 
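The error pairs above are benign: by the time the second cleanup path asks the runtime for the container's status, the container is already gone, and the kubelet logs the NotFound and moves on. A sketch of that idempotent delete, with hypothetical names rather than the kubelet's CRI client:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI "rpc error: code = NotFound" above.
var errNotFound = errors.New("NotFound: ID does not exist")

// removeContainer treats NotFound as success: if another path already
// deleted the container, there is nothing left to do.
func removeContainer(id string, deleted map[string]bool) error {
	if deleted[id] {
		fmt.Printf("DeleteContainer returned error for %s: %v (ignored)\n", id[:12], errNotFound)
		return nil
	}
	deleted[id] = true
	fmt.Printf("removed %s\n", id[:12])
	return nil
}

func main() {
	deleted := map[string]bool{}
	id := "b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"
	removeContainer(id, deleted) // first call deletes
	removeContainer(id, deleted) // second call hits NotFound, treated as done
}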
containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.306205 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": container with ID starting with 162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363 not found: ID does not exist" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.306248 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363"} err="failed to get container status \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": rpc error: code = NotFound desc = could not find container \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": container with ID starting with 162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363 not found: ID does not exist" Jan 22 12:52:51 crc kubenswrapper[5120]: I0122 12:52:51.586575 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" path="/var/lib/kubelet/pods/2c0d5290-04f4-4490-ad2f-54d0bf67056d/volumes" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.349090 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.352297 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.359382 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.361327 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972316 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972907 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972971 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.973650 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.973708 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" gracePeriod=600 Jan 22 12:53:02 crc kubenswrapper[5120]: E0122 12:53:02.109278 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310593 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" exitCode=0 Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310638 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310708 5120 scope.go:117] "RemoveContainer" containerID="29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.311506 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:02 crc kubenswrapper[5120]: E0122 12:53:02.312149 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:05 crc kubenswrapper[5120]: I0122 12:53:05.138771 5120 scope.go:117] "RemoveContainer" containerID="93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064" Jan 22 12:53:16 crc kubenswrapper[5120]: I0122 12:53:16.572579 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:16 crc kubenswrapper[5120]: E0122 12:53:16.578080 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:29 crc kubenswrapper[5120]: I0122 12:53:29.572159 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:29 crc 
Jan 22 12:53:42 crc kubenswrapper[5120]: I0122 12:53:42.571730 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:53:42 crc kubenswrapper[5120]: E0122 12:53:42.572853 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.215672 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"]
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216817 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-content"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216835 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-content"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216864 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-utilities"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216872 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-utilities"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216888 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216896 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.217287 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.241995 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"]
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.242164 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389987 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491273 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491783 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.492076 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.513378 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.568416 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q"
Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.018098 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"]
Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.740496 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892" exitCode=0
Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.741235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892"}
Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.741302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerStarted","Data":"bbe5e653076d109859ae55de05148dcacbdc0b4bbccf2a0b9c171c70f3e3127a"}
Jan 22 12:53:49 crc kubenswrapper[5120]: I0122 12:53:49.765381 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f" exitCode=0
Jan 22 12:53:49 crc kubenswrapper[5120]: I0122 12:53:49.765517 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f"}
Jan 22 12:53:50 crc kubenswrapper[5120]: I0122 12:53:50.776779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerStarted","Data":"2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0"}
Jan 22 12:53:50 crc kubenswrapper[5120]: I0122 12:53:50.800803 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ttj2q" podStartSLOduration=3.972924517 podStartE2EDuration="4.800786861s" podCreationTimestamp="2026-01-22 12:53:46 +0000 UTC" firstStartedPulling="2026-01-22 12:53:47.743551792 +0000 UTC m=+3962.487500173" lastFinishedPulling="2026-01-22 12:53:48.571414146 +0000 UTC m=+3963.315362517" observedRunningTime="2026-01-22 12:53:50.797582427 +0000 UTC m=+3965.541530768" watchObservedRunningTime="2026-01-22 12:53:50.800786861 +0000 UTC m=+3965.544735192"
Jan 22 12:53:53 crc kubenswrapper[5120]: I0122 12:53:53.572759 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:53:53 crc kubenswrapper[5120]: E0122 12:53:53.573634 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.569044 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.569491 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.630636 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.903674 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.974842 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:53:58 crc kubenswrapper[5120]: I0122 12:53:58.842452 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ttj2q" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" containerID="cri-o://2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" gracePeriod=2 Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.873454 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" exitCode=0 Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.873599 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0"} Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.952149 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047875 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047908 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.049637 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities" (OuterVolumeSpecName: "utilities") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.064062 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd" (OuterVolumeSpecName: "kube-api-access-547jd") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "kube-api-access-547jd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.102834 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "catalog-content". 
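"Killing container with a grace period" asks the runtime to stop the container gently first: conceptually, SIGTERM, a wait of up to gracePeriod, then SIGKILL. The process-level sketch below illustrates the pattern; the real path goes through the CRI to CRI-O, not os/exec:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	cmd.Process.Signal(syscall.SIGTERM) // polite stop request
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL once the grace period expires
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the log entry above
}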
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.138778 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139769 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139797 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139810 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-utilities" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139817 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-utilities" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139839 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-content" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139846 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-content" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.140019 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154235 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154294 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154310 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.192005 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.192711 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.196451 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.196967 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.200064 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.255873 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.358359 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.383623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.525321 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.885971 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"bbe5e653076d109859ae55de05148dcacbdc0b4bbccf2a0b9c171c70f3e3127a"} Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.886760 5120 scope.go:117] "RemoveContainer" containerID="2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.886372 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.920533 5120 scope.go:117] "RemoveContainer" containerID="e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.942838 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.951023 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.953433 5120 scope.go:117] "RemoveContainer" containerID="5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892" Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.006991 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:01 crc kubenswrapper[5120]: W0122 12:54:01.016678 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e09f33_037b_4402_b891_c7d84dca4e0c.slice/crio-c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d WatchSource:0}: Error finding container c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d: Status 404 returned error can't find the container with id c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.595561 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" path="/var/lib/kubelet/pods/b22aea1c-2669-424a-8776-4b9474da6cc6/volumes" Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.899905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerStarted","Data":"c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d"} Jan 22 12:54:02 crc kubenswrapper[5120]: I0122 12:54:02.920289 5120 generic.go:358] "Generic (PLEG): container finished" podID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerID="6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04" exitCode=0 Jan 22 12:54:02 crc kubenswrapper[5120]: I0122 12:54:02.920847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerDied","Data":"6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04"} Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.183362 5120 util.go:48] "No ready sandbox for pod can be found. 
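The RemoveStaleState entries above are the CPU and memory managers pruning per-container resource assignments whose pods no longer exist. A sketch of that cleanup over a containerMap keyed by pod UID and container name (illustrative data, not kubelet state):

package main

import "fmt"

func main() {
	type key struct{ podUID, container string }
	assignments := map[key]string{
		{"b22aea1c-2669-424a-8776-4b9474da6cc6", "registry-server"}: "cpuset 0-3",
		{"38e09f33-037b-4402-b891-c7d84dca4e0c", "oc"}:              "cpuset 0-3",
	}
	active := map[string]bool{"38e09f33-037b-4402-b891-c7d84dca4e0c": true}
	for k := range assignments { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}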
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.326067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"38e09f33-037b-4402-b891-c7d84dca4e0c\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") "
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.334904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9" (OuterVolumeSpecName: "kube-api-access-zmqq9") pod "38e09f33-037b-4402-b891-c7d84dca4e0c" (UID: "38e09f33-037b-4402-b891-c7d84dca4e0c"). InnerVolumeSpecName "kube-api-access-zmqq9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.427623 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") on node \"crc\" DevicePath \"\""
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.942621 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerDied","Data":"c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d"}
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.942716 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d"
Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.943162 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42"
Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.271569 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"]
Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.283788 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"]
Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.605413 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" path="/var/lib/kubelet/pods/f1196931-91a2-4869-bff6-80785ee0ed43/volumes"
Jan 22 12:54:06 crc kubenswrapper[5120]: I0122 12:54:06.572256 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:54:06 crc kubenswrapper[5120]: E0122 12:54:06.572422 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:54:17 crc kubenswrapper[5120]: I0122 12:54:17.572533 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:54:17 crc kubenswrapper[5120]: E0122 12:54:17.573536 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:54:31 crc kubenswrapper[5120]: I0122 12:54:31.572947 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:54:31 crc kubenswrapper[5120]: E0122 12:54:31.574306 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:54:45 crc kubenswrapper[5120]: I0122 12:54:45.598136 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:54:45 crc kubenswrapper[5120]: E0122 12:54:45.599189 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:54:56 crc kubenswrapper[5120]: I0122 12:54:56.571739 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"
Jan 22 12:54:56 crc kubenswrapper[5120]: E0122 12:54:56.572409 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
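"Cleaned up orphaned pod volumes dir" is the kubelet's housekeeping for /var/lib/kubelet/pods, where each directory is named after a pod UID: any UID no longer known to the kubelet gets its volumes directory removed. A self-contained sketch against a temporary directory (paths and the active set are illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cleanupOrphans(podsDir string, active map[string]bool) error {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		volumes := filepath.Join(podsDir, e.Name(), "volumes")
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), volumes)
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "pods")
	defer os.RemoveAll(dir)
	os.MkdirAll(filepath.Join(dir, "f1196931-91a2-4869-bff6-80785ee0ed43", "volumes"), 0o755)
	cleanupOrphans(dir, map[string]bool{}) // no active pods: the dir is orphaned
}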
12:54:56 crc kubenswrapper[5120]: E0122 12:54:56.572409 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:05 crc kubenswrapper[5120]: I0122 12:55:05.314023 5120 scope.go:117] "RemoveContainer" containerID="daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13" Jan 22 12:55:10 crc kubenswrapper[5120]: I0122 12:55:10.573635 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:10 crc kubenswrapper[5120]: E0122 12:55:10.575200 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:22 crc kubenswrapper[5120]: I0122 12:55:22.571560 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:22 crc kubenswrapper[5120]: E0122 12:55:22.572720 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:34 crc kubenswrapper[5120]: I0122 12:55:34.571785 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:34 crc kubenswrapper[5120]: E0122 12:55:34.573491 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:49 crc kubenswrapper[5120]: I0122 12:55:49.572693 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:49 crc kubenswrapper[5120]: E0122 12:55:49.574089 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.154126 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc 
kubenswrapper[5120]: I0122 12:56:00.155302 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.155315 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.155474 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.164845 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.164942 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.167699 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.167916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.170248 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.279548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.380928 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.427064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.488447 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.771807 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.778074 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:56:01 crc kubenswrapper[5120]: I0122 12:56:01.776725 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerStarted","Data":"b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783"} Jan 22 12:56:02 crc kubenswrapper[5120]: I0122 12:56:02.788894 5120 generic.go:358] "Generic (PLEG): container finished" podID="e601162d-810b-4cd9-a558-08f4b76f1234" containerID="7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4" exitCode=0 Jan 22 12:56:02 crc kubenswrapper[5120]: I0122 12:56:02.788993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerDied","Data":"7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4"} Jan 22 12:56:03 crc kubenswrapper[5120]: I0122 12:56:03.572268 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:03 crc kubenswrapper[5120]: E0122 12:56:03.572952 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.194566 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.282522 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"e601162d-810b-4cd9-a558-08f4b76f1234\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.300284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6" (OuterVolumeSpecName: "kube-api-access-l22p6") pod "e601162d-810b-4cd9-a558-08f4b76f1234" (UID: "e601162d-810b-4cd9-a558-08f4b76f1234"). InnerVolumeSpecName "kube-api-access-l22p6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.384290 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") on node \"crc\" DevicePath \"\"" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814327 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerDied","Data":"b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783"} Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814379 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814463 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.277776 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.290507 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.588402 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" path="/var/lib/kubelet/pods/730d9559-f767-44f0-9346-cfba60c8f1b5/volumes" Jan 22 12:56:17 crc kubenswrapper[5120]: I0122 12:56:17.572481 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:17 crc kubenswrapper[5120]: E0122 12:56:17.574559 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:30 crc kubenswrapper[5120]: I0122 12:56:30.574791 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:30 crc kubenswrapper[5120]: E0122 12:56:30.575945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:44 crc kubenswrapper[5120]: I0122 12:56:44.572437 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:44 crc kubenswrapper[5120]: E0122 12:56:44.573521 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:55 crc kubenswrapper[5120]: I0122 12:56:55.584068 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:55 crc kubenswrapper[5120]: E0122 12:56:55.585028 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:05 crc kubenswrapper[5120]: I0122 12:57:05.520681 5120 scope.go:117] "RemoveContainer" containerID="727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483" Jan 22 12:57:06 crc kubenswrapper[5120]: I0122 12:57:06.572678 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:06 crc kubenswrapper[5120]: E0122 12:57:06.573197 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:17 crc kubenswrapper[5120]: I0122 12:57:17.572387 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:17 crc kubenswrapper[5120]: E0122 12:57:17.573507 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.898328 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903381 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903449 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903857 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.917330 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.917502 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003388 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105551 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105603 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105637 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.106549 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.106841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.135701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod 
\"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.249851 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.735702 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:30 crc kubenswrapper[5120]: E0122 12:57:30.736434 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.821480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:30 crc kubenswrapper[5120]: W0122 12:57:30.827091 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73 WatchSource:0}: Error finding container 5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73: Status 404 returned error can't find the container with id 5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73 Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753401 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="d11b1c33635db296f3ff86c7092cf2072594be7992b8de2d62c687c16eab374e" exitCode=0 Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753476 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"d11b1c33635db296f3ff86c7092cf2072594be7992b8de2d62c687c16eab374e"} Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753894 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerStarted","Data":"5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73"} Jan 22 12:57:33 crc kubenswrapper[5120]: I0122 12:57:33.771437 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="aa18c7b92186cce5d553621916e8d46cdbd7e8fab98ca62507a976ffa85e7597" exitCode=0 Jan 22 12:57:33 crc kubenswrapper[5120]: I0122 12:57:33.771540 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"aa18c7b92186cce5d553621916e8d46cdbd7e8fab98ca62507a976ffa85e7597"} Jan 22 12:57:34 crc kubenswrapper[5120]: I0122 12:57:34.784161 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerStarted","Data":"174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846"} Jan 22 12:57:34 crc kubenswrapper[5120]: 
I0122 12:57:34.813598 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n6jng" podStartSLOduration=4.76418549 podStartE2EDuration="5.813580869s" podCreationTimestamp="2026-01-22 12:57:29 +0000 UTC" firstStartedPulling="2026-01-22 12:57:31.754320142 +0000 UTC m=+4186.498268483" lastFinishedPulling="2026-01-22 12:57:32.803715521 +0000 UTC m=+4187.547663862" observedRunningTime="2026-01-22 12:57:34.810837135 +0000 UTC m=+4189.554785516" watchObservedRunningTime="2026-01-22 12:57:34.813580869 +0000 UTC m=+4189.557529210" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.251146 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.252041 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.320710 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.894426 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:41 crc kubenswrapper[5120]: I0122 12:57:41.948338 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:43 crc kubenswrapper[5120]: I0122 12:57:43.876726 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n6jng" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" containerID="cri-o://174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846" gracePeriod=2 Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.573481 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:44 crc kubenswrapper[5120]: E0122 12:57:44.573864 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.885724 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846" exitCode=0 Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.885823 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846"} Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.886272 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73"} Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.886300 5120 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.906087 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914173 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.915405 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities" (OuterVolumeSpecName: "utilities") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.920662 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh" (OuterVolumeSpecName: "kube-api-access-8vkbh") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "kube-api-access-8vkbh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.963223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015335 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015375 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015387 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.895169 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.949137 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.968809 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:47 crc kubenswrapper[5120]: I0122 12:57:47.589736 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" path="/var/lib/kubelet/pods/81789982-6ef2-4e7d-ab11-33380f68aad4/volumes" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.501539 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.501587 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.512399 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.512665 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:57:55 crc kubenswrapper[5120]: E0122 12:57:55.006083 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:57:58 crc kubenswrapper[5120]: I0122 12:57:58.572680 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:58 crc kubenswrapper[5120]: E0122 12:57:58.573693 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.150316 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.151939 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-content" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152015 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-content" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152041 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152052 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152076 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-utilities" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152086 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-utilities" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152303 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.161523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.161714 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176575 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176879 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176921 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.305407 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.407403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.438536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.496314 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.772841 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:01 crc kubenswrapper[5120]: I0122 12:58:01.044020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerStarted","Data":"94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc"} Jan 22 12:58:02 crc kubenswrapper[5120]: I0122 12:58:02.053439 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerStarted","Data":"4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e"} Jan 22 12:58:02 crc kubenswrapper[5120]: I0122 12:58:02.071907 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" podStartSLOduration=1.1845751 podStartE2EDuration="2.071863026s" podCreationTimestamp="2026-01-22 12:58:00 +0000 UTC" firstStartedPulling="2026-01-22 12:58:00.773807001 +0000 UTC m=+4215.517755362" lastFinishedPulling="2026-01-22 12:58:01.661094937 +0000 UTC m=+4216.405043288" observedRunningTime="2026-01-22 12:58:02.067614627 +0000 UTC m=+4216.811562998" watchObservedRunningTime="2026-01-22 12:58:02.071863026 +0000 UTC m=+4216.815811387" Jan 22 12:58:03 crc kubenswrapper[5120]: I0122 12:58:03.064351 5120 generic.go:358] "Generic (PLEG): container finished" podID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerID="4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e" exitCode=0 Jan 22 12:58:03 crc kubenswrapper[5120]: I0122 12:58:03.064479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerDied","Data":"4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e"} Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.437221 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.585455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"bf95f5c6-016d-4a27-b836-07355b8fe40c\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.595217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx" (OuterVolumeSpecName: "kube-api-access-sjjnx") pod "bf95f5c6-016d-4a27-b836-07355b8fe40c" (UID: "bf95f5c6-016d-4a27-b836-07355b8fe40c"). InnerVolumeSpecName "kube-api-access-sjjnx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.688546 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") on node \"crc\" DevicePath \"\"" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084460 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerDied","Data":"94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc"} Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084523 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084612 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.132399 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.141717 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:58:05 crc kubenswrapper[5120]: E0122 12:58:05.218583 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.596934 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" path="/var/lib/kubelet/pods/813b78c4-6644-444f-baa4-af92c9a1bfd0/volumes" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.664916 5120 scope.go:117] "RemoveContainer" containerID="ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8" Jan 22 12:58:12 crc kubenswrapper[5120]: I0122 12:58:12.572065 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:58:13 crc kubenswrapper[5120]: I0122 12:58:13.168108 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} Jan 22 12:58:15 crc kubenswrapper[5120]: E0122 12:58:15.407630 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:25 crc kubenswrapper[5120]: 
E0122 12:58:25.627625 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:35 crc kubenswrapper[5120]: E0122 12:58:35.816915 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.138679 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.140793 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.140822 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.141065 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.157211 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.157402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.159375 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.160104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.160682 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165232 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165277 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165413 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.167446 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.167682 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251722 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251787 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251903 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.252049 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353788 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353846 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353888 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.356088 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.374226 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.376503 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.376623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.492157 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.501947 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.783342 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:01 crc kubenswrapper[5120]: W0122 13:00:01.055939 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0c82bd_3880_4a7b_98d0_751c23215e35.slice/crio-42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122 WatchSource:0}: Error finding container 42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122: Status 404 returned error can't find the container with id 42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122 Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.057857 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541683 5120 generic.go:358] "Generic (PLEG): container finished" podID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerID="95f8510cf745c21585132dbafc647672d844d28bb93b1ec91530a1a9f1b4139f" exitCode=0 Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerDied","Data":"95f8510cf745c21585132dbafc647672d844d28bb93b1ec91530a1a9f1b4139f"} Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerStarted","Data":"d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3"} Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.543768 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerStarted","Data":"42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122"} Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.797970 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.904612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.904746 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.905968 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.906792 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.910456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.910845 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258" (OuterVolumeSpecName: "kube-api-access-h7258") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "kube-api-access-h7258". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007908 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007970 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007988 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566657 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerDied","Data":"d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3"} Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566709 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.875746 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.880713 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 13:00:05 crc kubenswrapper[5120]: I0122 13:00:05.590660 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" path="/var/lib/kubelet/pods/5445dd15-192f-4528-92eb-f9507eb342c4/volumes" Jan 22 13:00:05 crc kubenswrapper[5120]: I0122 13:00:05.861720 5120 scope.go:117] "RemoveContainer" containerID="21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef" Jan 22 13:00:19 crc kubenswrapper[5120]: I0122 13:00:19.748031 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerStarted","Data":"fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b"} Jan 22 13:00:19 crc kubenswrapper[5120]: I0122 13:00:19.768999 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484780-r76g8" podStartSLOduration=1.656337352 podStartE2EDuration="19.768983759s" podCreationTimestamp="2026-01-22 13:00:00 +0000 UTC" firstStartedPulling="2026-01-22 13:00:01.058180164 +0000 UTC m=+4335.802128555" lastFinishedPulling="2026-01-22 13:00:19.170826621 +0000 UTC m=+4353.914774962" observedRunningTime="2026-01-22 13:00:19.763627274 +0000 UTC m=+4354.507575625" watchObservedRunningTime="2026-01-22 13:00:19.768983759 +0000 UTC m=+4354.512932100" Jan 22 13:00:20 crc kubenswrapper[5120]: I0122 13:00:20.759469 5120 generic.go:358] "Generic (PLEG): container finished" 
podID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerID="fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b" exitCode=0 Jan 22 13:00:20 crc kubenswrapper[5120]: I0122 13:00:20.759567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerDied","Data":"fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b"} Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.144786 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.217488 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"7e0c82bd-3880-4a7b-98d0-751c23215e35\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.222641 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp" (OuterVolumeSpecName: "kube-api-access-qgzlp") pod "7e0c82bd-3880-4a7b-98d0-751c23215e35" (UID: "7e0c82bd-3880-4a7b-98d0-751c23215e35"). InnerVolumeSpecName "kube-api-access-qgzlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.320084 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.782526 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.782543 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerDied","Data":"42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122"} Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.783454 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.840916 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.845711 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 13:00:23 crc kubenswrapper[5120]: I0122 13:00:23.591024 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" path="/var/lib/kubelet/pods/38e09f33-037b-4402-b891-c7d84dca4e0c/volumes" Jan 22 13:00:32 crc kubenswrapper[5120]: I0122 13:00:32.113639 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:00:32 crc kubenswrapper[5120]: I0122 13:00:32.114215 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:01 crc kubenswrapper[5120]: I0122 13:01:01.973152 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:01:01 crc kubenswrapper[5120]: I0122 13:01:01.973865 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:05 crc kubenswrapper[5120]: I0122 13:01:05.923529 5120 scope.go:117] "RemoveContainer" containerID="6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.973317 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.974042 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.974135 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.975052 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.975140 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed" gracePeriod=600 Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.133838 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680460 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed" exitCode=0 Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680558 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680746 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"66b3269c5b52320afd6538f2e9bbfc65ba479b93c17773ef46f5d4ccf54097d1"} Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680766 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.163358 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164903 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164921 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164952 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164981 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.165154 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" 
containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.165177 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.177999 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.178269 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.181590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.182522 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.182753 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.183147 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.282460 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.307716 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.509375 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.733228 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.962515 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerStarted","Data":"bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1"} Jan 22 13:02:02 crc kubenswrapper[5120]: I0122 13:02:02.980276 5120 generic.go:358] "Generic (PLEG): container finished" podID="44bfe647-f6af-4128-a6d5-c44e07a88656" containerID="8c525e7f1b5c178020eb94d211f73512158ca45430ba0508756922f0a66a75f4" exitCode=0 Jan 22 13:02:02 crc kubenswrapper[5120]: I0122 13:02:02.980475 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerDied","Data":"8c525e7f1b5c178020eb94d211f73512158ca45430ba0508756922f0a66a75f4"} Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.273610 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.345595 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"44bfe647-f6af-4128-a6d5-c44e07a88656\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.361053 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t" (OuterVolumeSpecName: "kube-api-access-tnj7t") pod "44bfe647-f6af-4128-a6d5-c44e07a88656" (UID: "44bfe647-f6af-4128-a6d5-c44e07a88656"). InnerVolumeSpecName "kube-api-access-tnj7t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.447428 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") on node \"crc\" DevicePath \"\"" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002250 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerDied","Data":"bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1"} Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002948 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002471 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.337852 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.342341 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.587825 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" path="/var/lib/kubelet/pods/e601162d-810b-4cd9-a558-08f4b76f1234/volumes" Jan 22 13:02:06 crc kubenswrapper[5120]: I0122 13:02:06.081789 5120 scope.go:117] "RemoveContainer" containerID="7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.642555 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.647515 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.658885 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.662598 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134420053024442 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134420054017360 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134406541016510 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134406541015460 5ustar corecore